How to change transparency of texture in shader - opengl-es

I want to create a shader that changes the alpha of a texture from invisible to visible. I've seen the method of changing gl_FragColor.a, but it doesn't work for me: whatever I set there, the whole screen is always black.
precision mediump float;
uniform vec4 vColor;
uniform sampler2D uTexture;
varying vec2 vTexCoordinate;
uniform float alphaMod;

void main() {
    gl_FragColor = texture2D(uTexture, vTexCoordinate);
    gl_FragColor.a = alphaMod;
}
The modified object should be barely visible, but for now it is completely invisible.
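Two things worth checking that the question doesn't mention: writing to gl_FragColor.a only has a visible effect when blending is enabled, and a float uniform that is never set defaults to 0.0, which would make the object fully invisible. A minimal host-side sketch, assuming a standard ES 2.0 context and that program is the linked program object:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

/* set the alpha uniform every frame; if it is never set it
   stays at its default of 0.0 and nothing will show up */
GLint alphaLoc = glGetUniformLocation(program, "alphaMod");
glUseProgram(program);
glUniform1f(alphaLoc, 0.25f); /* barely visible */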

Related

Three.js renders unprocessed png image for texture

With three.js, I am trying to create a scene where a plane becomes transparent as the camera moves away from it.
I textured the plane object with a round map tile, edited from the square image below.
When I load the round image through a ShaderMaterial, the texture appears square, like the original image.
The weird thing is that it renders as intended when the image is loaded onto a regular mesh material.
Could you tell me why three.js behaves this way? Also, how might I render the round tile with a shader while keeping its ability to fade based on distance?
The full code is available here: https://codesandbox.io/s/tile-with-shader-7kw5v?file=/src/index.js
Here is an option that takes into account only the x and z coordinates of the plane and the camera.
vertex.glsl:
varying vec4 vPosition;
varying vec2 vUv;

void main() {
    vPosition = modelMatrix * vec4(position, 1.);
    vUv = uv;
    gl_Position = projectionMatrix * viewMatrix * vPosition;
}
frag.glsl:
uniform vec3 u_color;
uniform vec3 u_camera;
uniform vec3 u_plane;
uniform float u_rad;
uniform sampler2D u_texture;
varying vec4 vPosition;
varying vec2 vUv;
void main() {
    vec4 textureCol = texture2D(u_texture, vUv);
    float rad = distance(vPosition.xz, u_camera.xz); // distance in the xz-plane
    // multiply rather than overwrite, so the texture's own alpha
    // (the round mask) is preserved while the tile fades out
    textureCol.a *= 1.0 - step(1., rad / u_rad);
    gl_FragColor = textureCol;
}
and the u_rad uniform is
u_rad: { value: 50 },
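For the alpha channel to have any effect, the ShaderMaterial also needs transparent: true. A minimal setup sketch; the uniform names match the shaders above, while texture, camera, and the shader strings are assumed to come from the rest of the project (u_color and u_plane are omitted for brevity):

const material = new THREE.ShaderMaterial({
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
  transparent: true, // without this, the alpha written in frag.glsl is ignored
  uniforms: {
    u_texture: { value: texture },
    u_camera: { value: camera.position },
    u_rad: { value: 50 },
  },
});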

Three.js THREE.InstancedBufferGeometry multiple sampler2D vertical tearings

I have created a RawShaderMaterial for an InstancedBufferGeometry object that renders fine with one sampler2D uniform. As soon as it uses a second sampler2D uniform, it renders with a lot of vertical tearing.
Here is the fragment shader:
precision highp float;
uniform sampler2D utexture1;
uniform sampler2D utexture2;
varying float vindex;
varying vec2 vUv;
varying vec3 mapcolor;
vec4 gettexture() {
    vec4 color = vec4(0.); // avoid returning an uninitialized value for unexpected indices
    if (vindex == 0.) {
        color = texture2D(utexture1, vUv) * vec4(mapcolor, 1.);
    } else if (vindex == 1.) {
        color = texture2D(utexture1, vUv) * vec4(mapcolor, 1.);
    } else if (vindex == 2.) {
        color = texture2D(utexture2, vUv) * vec4(mapcolor, 1.);
    } else if (vindex == 3.) {
        color = texture2D(utexture2, vUv) * vec4(mapcolor, 1.);
    }
    return color;
}

void main() {
    gl_FragColor = gettexture();
}
Notes: the two textures used for the sampler2D uniforms are the same size (512x512), and they are loaded before the material is created.
Does anyone know where this vertical tearing comes from?
Thank you in advance for your help!
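One likely cause, offered as a guess since the thread has no accepted answer: calling texture2D inside non-uniform control flow (branches that depend on the per-instance vindex varying) makes the implicit derivatives used for mip-level selection undefined, which can show up as exactly this kind of artifact. A branch-free rewrite sketch, using the same declarations as the shader above, that samples both textures unconditionally and then selects:

vec4 gettexture() {
    // sample both textures outside any branch so derivative-based
    // mip selection stays well defined
    vec4 c1 = texture2D(utexture1, vUv);
    vec4 c2 = texture2D(utexture2, vUv);
    // indices 0-1 use utexture1, indices 2-3 use utexture2
    float useSecond = step(1.5, vindex);
    return mix(c1, c2, useSecond) * vec4(mapcolor, 1.);
}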

How to write only alpha > 0 to the stencil buffer?

I am trying to render a texture to the stencil buffer. I only want pixels whose alpha is > 0, but my code is writing every pixel of my quad, even the ones with 0 alpha. How can I avoid this?
Here's my code:
// pass 1: render the mask into the stencil buffer only
GL.StencilOp(StencilOp.Keep, StencilOp.Keep, StencilOp.Incr);
GL.ColorMask(false, false, false, false);
GL.DepthMask(false);
RenderMask(mask);

// pass 2: draw normally, but only where the stencil value equals 1
GL.StencilFunc(StencilFunction.Equal, 1, 0xFF);
GL.StencilOp(StencilOp.Keep, StencilOp.Keep, StencilOp.Keep);
GL.ColorMask(true, true, true, true);
GL.DepthMask(true);
When debugging with RenderDoc I see that the stencil buffer contains 1s where my texture is... but it's a rectangle; it does not take alpha into account.
Here's my fragment shader (it works fine for normal rendering):
varying lowp vec4 vColor;
varying lowp vec2 vTexCoords;
uniform lowp sampler2D uTexture;

void main() {
    gl_FragColor = texture2D(uTexture, vTexCoords) * vColor;
}
Use a "discard" statement in the shader to drop the fragments you don't want to keep.
varying lowp vec4 vColor;
varying lowp vec2 vTexCoords;
uniform lowp sampler2D uTexture;

void main() {
    vec4 color = texture2D(uTexture, vTexCoords) * vColor;
    if (color.a == 0.0) {
        discard;
    }
    gl_FragColor = color;
}
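If the texture is filtered or antialiased, edge texels end up with small non-zero alpha values, pass the == 0.0 test, and still get written to the stencil buffer. Comparing against a threshold is usually more robust; the 0.5 cutoff below is an arbitrary value to tune:

    // in the shader above: discard anything mostly transparent,
    // not only texels whose alpha is exactly 0.0
    if (color.a < 0.5) {
        discard;
    }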

OpenGLES 2.0 set vertex colors

I am creating a drawing app and need to change the colors periodically, so one point might be green and another red.
I'm trying to do it as follows:
program
glBindAttribLocation(_program, ATTRIB_COLOR, "color");
vertex shader
attribute vec4 position;
attribute float size;
attribute vec4 color;
varying vec4 fragColor;

void main()
{
    gl_Position = position;
    gl_PointSize = 30.0;
    fragColor = color;
}
Fragment shader
precision mediump float;
varying vec4 fragColor;

void main() {
    gl_FragColor = fragColor;
}
The problem is that the color varies depending on where the point is positioned on the screen. If I set red as the attribute color, I need it to be pure red wherever the point appears on screen.
It turned out the problem was not related to the above code: I had misaligned the attribute data being sent to the shader by the program.
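For anyone hitting the same symptom: "misaligned attribute data" usually means the stride and offsets passed to glVertexAttribPointer don't match the actual interleaved vertex layout. A sketch of a consistent setup, assuming an interleaved buffer of position (4 floats), size (1 float), and color (4 floats), with ATTRIB_POSITION, ATTRIB_SIZE, and ATTRIB_COLOR as the bound attribute indices:

const GLsizei stride = 9 * sizeof(GLfloat); /* 4 + 1 + 4 floats per vertex */
glVertexAttribPointer(ATTRIB_POSITION, 4, GL_FLOAT, GL_FALSE, stride, (const void *)0);
glVertexAttribPointer(ATTRIB_SIZE,     1, GL_FLOAT, GL_FALSE, stride, (const void *)(4 * sizeof(GLfloat)));
glVertexAttribPointer(ATTRIB_COLOR,    4, GL_FLOAT, GL_FALSE, stride, (const void *)(5 * sizeof(GLfloat)));
glEnableVertexAttribArray(ATTRIB_POSITION);
glEnableVertexAttribArray(ATTRIB_SIZE);
glEnableVertexAttribArray(ATTRIB_COLOR);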

Desktop GLSL without ftransform()

I'm porting a codebase of mine from fixed-function OpenGL 1.x to OpenGL 2.x - technically OpenGL ES 2.0, but I'm still coding on the desktop, just keeping in mind the limitations that ES 2.0 imposes, which are similar to the 3.1 'new' profile.
The problem is that for anything other than 2D, creating a shader and passing in the model-view-projection matrix as a uniform does not work. Normally I get a black screen, but if I set the Z value of all my vertices to 0, things show up.
Putting my shaders in RenderMonkey works when I have ES 2.0 mode enabled, but on standard desktop GL it's just a black screen (no compiler errors/warnings):
vert shader:
uniform mat4 mvp_matrix;
uniform mat4 obj_matrix;
uniform vec4 u_color;
attribute vec3 a_vertex;
attribute vec2 a_texcoord0;
varying vec4 v_color;
varying vec2 v_texcoord0;

void main(void)
{
    v_color = u_color;
    gl_Position = mvp_matrix * (obj_matrix * vec4(a_vertex, 1.0));
    v_texcoord0 = a_texcoord0;
}
frag shader:
uniform sampler2D t_texture0;
varying vec2 v_texcoord0;
varying vec4 v_color;

void main(void)
{
    vec4 color = texture2D(t_texture0, v_texcoord0);
    gl_FragColor = color * v_color;
}
I am passing in the matrices with glUniformMatrix4fv(location, 1, GL_FALSE, mvpMatrix);
This shader works like gold for anything drawn in 2D. What am I doing wrong here? Or am I required to use ftransform() on desktop GL?
One thing I think needs a bit of clarification:
A model matrix transforms an object from object coordinates to world coordinates.
A view matrix transforms the world coordinates to eye coordinates.
A projection matrix converts eye coordinates to clip coordinates.
Based on standard naming conventions, the mvpMatrix is projection * view * model, in that order. There are no other matrices that you need to multiply by. Projection is your projection matrix (either orthographic or perspective), view is the camera transform matrix (NOT the modelview), and model is the position, scale, and rotation of your object.
I believe the issue lies either in multiplying matrices that don't need to be multiplied together or in multiplying matrices in the wrong order (matrix multiplication isn't commutative).
If you haven't already solved this, I would recommend sending all 3 matrices over separately and later dumping the values back out to make sure there are no issues in how the matrices are sent over.
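Dumping the values back can be done with glGetUniformfv; a quick check sketch, assuming program is the linked program object:

GLfloat check[16];
glGetUniformfv(program, glGetUniformLocation(program, "model"), check);
/* compare check[0..15] with the matrix you uploaded; a mismatch
   points at an upload problem rather than the shader itself */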
Vertex shader:
attribute vec4 a_vertex;
attribute vec2 a_texcoord0;
varying vec2 v_texcoord0;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

void main(void)
{
    gl_Position = projection * view * model * a_vertex;
    v_texcoord0 = a_texcoord0;
}
Fragment Shader:
uniform sampler2D t_texture0;
uniform vec4 u_color;
varying vec2 v_texcoord0;

void main(void)
{
    vec4 color = texture2D(t_texture0, v_texcoord0);
    gl_FragColor = color * u_color;
}
Also, I moved the color uniform to the fragment shader; passing it through as a varying is unnecessary when all the vertices have the same color.
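One more thing worth double-checking, since "works in 2D, black screen in 3D" is a classic symptom of a transposed matrix: with the transpose argument set to GL_FALSE (the only value ES 2.0 allows), glUniformMatrix4fv expects the array in column-major order. A sketch of uploading the three matrices separately, assuming program is the linked program and the float arrays are column-major:

glUseProgram(program);
/* GL_FALSE here means the float arrays must already be column-major */
glUniformMatrix4fv(glGetUniformLocation(program, "projection"), 1, GL_FALSE, projectionMatrix);
glUniformMatrix4fv(glGetUniformLocation(program, "view"),       1, GL_FALSE, viewMatrix);
glUniformMatrix4fv(glGetUniformLocation(program, "model"),      1, GL_FALSE, modelMatrix);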
