Threejs Shader - gl_FragColor with Alpha (opacity not changing) - three.js

I'm trying to write a simple shader where half of my scene will be displayed and half of the scene will be transparent. I can't seem to figure out why the transparency isn't working:
uniform sampler2D tDiffuse;
varying vec2 vUv;
void main() {
    vec2 p = vUv;
    vec4 color;
    if (p.x < 0.5) {
        color = vec4(1.0, 0.0, 0.0, 0.0);
    } else {
        color = texture2D(tDiffuse, p);
    }
    gl_FragColor = color;
}
The shader is definitely running without errors - the right half of the screen is my threejs scene and the left half of the screen is red (when it should really be transparent). I've read that maybe I need to call glBlendFunc(GL_SRC_ALPHA); - but I am getting errors when I try this. To do this I did renderer.context.blendFuncSeparate(GL_SRC_ALPHA); in my main js file (not the shader). Am I supposed to place this somewhere else to make it work?
Any help would be appreciated. For reference, I'm applying my shader with the standard EffectComposer, ShaderPass, etc., which most three.js shader examples use.
Thanks in advance for your help!!!

It is difficult to help you with only partial information and code snippets, so I can only make educated guesses.
By default, EffectComposer uses a render target with RGB format. Did you specify RGBA?
Did you specify material.transparent = true?
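For example, a minimal sketch of passing an RGBA render target to EffectComposer (the size and filter settings here are illustrative, not taken from your code):
// Give EffectComposer an RGBA render target so the alpha channel survives the pass.
var renderTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter,
    format: THREE.RGBAFormat
});
var composer = new THREE.EffectComposer(renderer, renderTarget);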
three.js r.56

I had this problem and for me it was that the material didn't have transparency enabled.
let myMaterial = new THREE.ShaderMaterial({
    uniforms: myUniforms,
    fragmentShader: myFragmentShader(),
    vertexShader: myVertexShader(),
});
myMaterial.transparent = true;
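Note that transparent: true can equivalently be passed directly in the ShaderMaterial constructor's parameter object.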

Related

Blending issues between InstancedMesh using a ShaderMaterial (trying to reproduce mix-blend-mode: overlay from Figma)

I've been working on blending between InstancedMesh objects using a ShaderMaterial. The blending between the InstancedMesh works fine with the built-in THREE.JS blending modes (Additive, Subtractive, ...), but I struggle to match the Figma design with those (see the attached screenshots: the first is the current result in THREE.JS using additive blending, the second is the result I need from Figma).
[Screenshot: current result using additive blending]
[Screenshot: result needed from Figma]
A mix-blend-mode: overlay is used in Figma between all the circles (the InstancedMesh in Three.js), so I tried to add some GLSL blend-overlay code (https://github.com/jamieowen/glsl-blend) like this in the ShaderMaterial used by the InstancedMesh:
uniform vec3 uColors[6];
uniform float uThresholds[6];
varying vec2 vUv;
float blendOverlay(float base, float blend) {
    return base < 0.5 ? (2.0 * base * blend) : (1.0 - 2.0 * (1.0 - base) * (1.0 - blend));
}
vec3 blendOverlay(vec3 base, vec3 blend) {
    return vec3(blendOverlay(base.r, blend.r), blendOverlay(base.g, blend.g), blendOverlay(base.b, blend.b));
}
vec3 blendOverlay(vec3 base, vec3 blend, float opacity) {
    return (blendOverlay(base, blend) * opacity + base * (1.0 - opacity));
}
void main() {
    vec3 color = mix(uColors[0], uColors[1], smoothstep(uThresholds[0], uThresholds[1], vUv.y));
    color = mix(color, uColors[2], smoothstep(uThresholds[1], uThresholds[2], vUv.y));
    color = mix(color, uColors[3], smoothstep(uThresholds[2], uThresholds[3], vUv.y));
    color = mix(color, uColors[4], smoothstep(uThresholds[3], uThresholds[4], vUv.y));
    color = mix(color, uColors[5], smoothstep(uThresholds[4], uThresholds[5], vUv.y));
    gl_FragColor = vec4(blendOverlay(color.rgb, ???), 1.0);
}
I understood that I would need some sampler2D uniforms to use as texture2D inputs for the blendOverlay method to work, but my problem is: how can I render those textures?
If the overlay were just between a "background" and all the InstancedMesh, I could render the InstancedMesh into a renderTarget once and use it as a texture. But here I need the overlay blending to happen between all the InstancedMesh objects. Should I render the InstancedMesh one by one into a renderTarget and store every texture? I'm a bit lost here hehe.
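For context, a minimal sketch of the render-to-target idea mentioned above (the scene, material, and uniform names are illustrative):
// Render the instanced circles into an off-screen target, then feed that texture
// to the overlay shader as a sampler2D uniform.
const renderTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
renderer.setRenderTarget(renderTarget);
renderer.render(instancedScene, camera); // a scene containing only the InstancedMesh
renderer.setRenderTarget(null);
overlayMaterial.uniforms.uBaseTexture.value = renderTarget.texture; // hypothetical uniform name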

Having some weird artifacting and odd triangle shadows with SSAO OpenGL implementation

I have been working on implementing SSAO into the engine I am writing, and a major problem has arrived. Everything was going quite well until I realized that my SSAO was not working correctly. There are two things that I can find that are wrong with my SSAO and I am unable to figure out how to remedy them.
My shader code is at the end of this post; before that I will describe the problems with images.
Firstly, as seen in the screenshot below, there are some weird artifacts showing up depending on the viewing angle. So far I am assuming the way I am implementing the view matrix is wrong. I have done a lot of research about how this all should work and I understand it in theory. However, in practice things are not behaving as I would expect.
Secondly, whenever I get close to the blocks, I get very odd triangle shadows that appear around the edges of the screen, as shown in the next screenshot.
[Screenshot: odd triangle shadows around the edges of the screen]
These two images show the main issues I am having. I am using a deferred-style renderer to render the geometry to a few textures (position, normals, color), then importing these textures and using them to compose the final output. The first two code blocks are the vertex and fragment shaders, respectively, for writing the geometry to the textures.
Vertex Shader
#version 430 core
layout(location=0) in mat4 modelMatrix;
layout(location=4) in vec4 VertexPosition;
layout(location=5) in vec4 VertexNormal;
layout(location=6) in vec3 VertexColor;
layout(location=7) in vec2 TextureCoords;
out vec4 vNormal;
out vec3 vColor;
out vec4 shaderCoord;
out vec2 texCoords;
layout(location=8) uniform mat4 V;
layout(location=12) uniform mat4 P;
void main()
{
    shaderCoord = V * modelMatrix * VertexPosition;
    mat4 normalMatrix = transpose(inverse(V * modelMatrix));
    vNormal = normalMatrix * VertexNormal;
    texCoords = TextureCoords;
    vColor = VertexColor;
    gl_Position = P * shaderCoord;
}
Fragment Shader
#version 430 core
in vec4 vNormal;
in vec3 vColor;
in vec4 shaderCoord;
in vec2 texCoords;
layout (location=0) out vec4 NormalBuffer;
layout (location=1) out vec4 ColorBuffer;
layout (location=2) out vec4 PositionBuffer;
layout (location=3) out vec4 TextureCoordBuffer;
out float fragDepth;
//Start of the main function.
void main()
{
    NormalBuffer = vec4(normalize(vNormal).xyz, 1.0);
    ColorBuffer = vec4(vColor, 1.0);
    PositionBuffer = vec4(shaderCoord.xyz, 1.0);
    TextureCoordBuffer = vec4(texCoords, 0.0, 1.0);
    fragDepth = gl_FragCoord.z;
}
As you can see, I am translating everything from world space to view space before writing it to the textures. I would much prefer to keep everything in world space, but when I do, the entire screen looks white with occasional hints of shadows, and the background swaps between white and black depending on the camera angle.
Next are my SSAO shaders. In order to implement these I followed a few tutorials, so they will probably look familiar. If the tutorials were correct, the next two shaders should work, but they do not.
Vertex shader that just creates a quad and applies the final texture to it.
#version 430 core
layout (location=0) in vec3 VertexPosition;
layout (location=1) in vec2 TextureCoords;
out vec2 texCoords;
void main (){
    texCoords = TextureCoords;
    gl_Position = vec4(VertexPosition, 1.0);
}
Fragment shader for SSAO
#version 430 core
in vec2 texCoords;
layout (location=0) out vec4 fColor;
uniform sampler2D NormalBuffer;
uniform sampler2D positionBuffer;
uniform sampler2DArrayShadow shadowMap;
uniform sampler1D SSAOKernelMap;
uniform sampler2D SSAONoiseMap;
layout(location=12) uniform mat4 P;
layout(location=8) uniform mat4 V;
uniform uint kernelSize;
uniform vec2 windowSize;
//Define variables for SSAO processing.
float radius = 0.5;
float SSAOBias = 0.025;
float power = 1.5;
//mat4 biasMatrix = mat4(0.5,0.0,0.0,0.0,0.0,0.5,0.0,0.0,0.0,0.0,0.5,0.0,0.5,0.5,0.5,1.0);
void main()
{
    //Retrieve from textures
    vec3 shaderCoord = texture(positionBuffer, texCoords).xyz;
    vec3 vNormal = normalize(texture(NormalBuffer, texCoords).rgb);
    //Process SSAO
    vec2 NoiseScale = vec2(windowSize.x / 4.0, windowSize.y / 4.0);
    vec3 randVec = normalize(texture(SSAONoiseMap, texCoords * NoiseScale).xyz);
    vec3 tangent = normalize(randVec - vNormal * dot(randVec, vNormal));
    vec3 bitTangent = cross(vNormal, tangent);
    mat3 TBN = mat3(tangent, bitTangent, vNormal);
    //Begin processing of SSAO with the inputted kernel samples
    float Occlusion = 0.0;
    for(int i = 0; i < kernelSize; i++){
        vec4 kernelSample = texture(SSAOKernelMap, i);
        vec3 TSample = TBN * kernelSample.rgb;
        TSample = shaderCoord + TSample * radius;
        vec4 newCoord = vec4(TSample, 1.0);
        newCoord = P * newCoord;
        newCoord.xyz /= newCoord.w;
        newCoord.xyz = newCoord.xyz * 0.5 + 0.5;
        float sampleDepth = texture(positionBuffer, newCoord.xy).z;
        //float rangeCheck = smoothstep(0.0, 1.0, radius / abs(shaderCoord.z - sampleDepth));
        Occlusion += (sampleDepth >= TSample.z + SSAOBias ? 1.0 : 0.0);
    }
    Occlusion = 1.0 - (Occlusion / kernelSize);
    fColor = vec4(vec3(Occlusion), 1.0f);
}
That is all the information I can think to provide initially. Any help you guys can provide would be immensely helpful! If any other information would help, please let me know and I will be happy to provide it.
EDIT:
I figured out that one of my issues was the way I was accessing the 1D texture above, which made all the kernel samples very strange. I fixed that, and now I am getting something like the image below, where the screen is lighter on one side and darker on the other. The contrast line moves with the camera.
Any help with this issue would be immensely appreciated!
I have found two things that were wrong; fixing them mostly resolved the issue this post is about.
Firstly, the format in which I was passing the kernelMap was off, so all the values were quite skewed.
Secondly, I was unable to figure out why, but when I passed the position and normal values to the Lighting fragment shader in world space and then applied the view and projection matrices to them there, they turned out very strangely. However, if I applied the view and projection matrices to the position and normal values in the BaseGeometry shader and then reverted that transformation in the Lighting shader, everything worked perfectly.
If I find out any more information I will happily post it here for any future searchers.
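For reference, a minimal sketch of what correctly sampling the 1D kernel texture typically looks like (assuming one kernel sample per texel; this is illustrative, not the exact code used here):
// texture() expects a normalized coordinate in [0, 1], not an integer index,
// so convert the loop index to a texel-centre coordinate before sampling.
float u = (float(i) + 0.5) / float(kernelSize);
vec3 kernelSample = texture(SSAOKernelMap, u).rgb;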

Why do I need to define a precision value in webgl shaders?

I'm trying to get this tutorial to work but I ran into two issues, one of which is the following.
When I run the code as is, I get an error in the fragment shader saying: THREE.WebGLShader: gl.getShaderInfoLog() ERROR: 0:2: '' : No precision specified for (float). So what I did was specify a precision for every float/vector I define, like so: varying highp vec3 vNormal. This eliminates the error, but I don't get why. I can't find any other example where precision qualifiers are added to variable declarations. Can anybody explain why this occurs? Does it have something to do with my browser (Chrome 38)?
There is no default precision in WebGL fragment shaders. (High precision is default for vertex shaders.) The easiest solution is to add
precision highp float;
to all of your fragment shaders, which will eliminate the need to define the precision for all floating point vector variables, but generally,
precision mediump float;
will be preferable for performance. I do not advise lowp; good mobile hardware today doesn't even support it anymore, and instead does the equivalent of typedefing lowp to mediump.
Jessy's answer is correct that most fragment shaders set a default precision at the top of the fragment shader code.
However, you are using Three.js's RawShaderMaterial, which does not prepend any of the built-in uniforms, attributes, or precision declarations, so you have to define them yourself.
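For illustration, a minimal sketch of what a RawShaderMaterial fragment shader has to declare on its own (this is an illustrative shader, not the tutorial's code):
// RawShaderMaterial prepends nothing, so the precision qualifier must be written explicitly.
precision mediump float;
varying vec3 vNormal;
void main() {
    gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}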
On the other hand, the tutorial you linked to uses Three.js's ShaderMaterial, so Three.js will prepend the precision declaration automatically.
If you remove the default uniforms/attributes from your shader code and use ShaderMaterial instead, it will work without the precision declarations.
Vertex Shader
varying vec3 vNormal;
void main() {
    vNormal = normal;
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position, 1.0);
}
Fragment Shader
varying vec3 vNormal;
void main() {
    vec3 light = vec3(0.5, 0.2, 1.0);
    // ensure it's normalized
    light = normalize(light);
    // calculate the dot product of
    // the light to the vertex normal
    float dProd = max(0.0, dot(vNormal, light));
    // feed into our frag colour
    gl_FragColor = vec4(dProd, // R
                        dProd, // G
                        dProd, // B
                        1.0);  // A
}
Update to the material
// create the sphere's material
var shaderMaterial = new THREE.ShaderMaterial({
    vertexShader: document.getElementById('vertex-shader').innerHTML,
    fragmentShader: document.getElementById('fragment-shader').innerHTML
});
Here is a fiddle of your code without the precision declarations.

How to implement a ShaderToy shader in three.js?

Looking for info on how to recreate the ShaderToy parameters iGlobalTime, iChannel etc. within three.js. I know that iGlobalTime is the time elapsed since the shader started, and I think the iChannel stuff is for pulling RGB out of textures, but I would appreciate info on how to set these.
Edit: I have been going through all the shaders that come with the three.js examples and think that the answers are all in there somewhere - I just have to find the equivalent to e.g. iChannel1 = a texture input, etc.
I am not sure if you have already answered your question, but it might be good for others to know the steps for integrating Shadertoy shaders into THREEJS.
First, you need to know that Shadertoy shaders are fragment shaders. That being said, you have to set up a "general purpose" vertex shader that should work with all shadertoys (fragment shaders).
Step 1
Create a "general purpose" vertex shader
varying vec2 vUv;
void main()
{
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * mvPosition;
}
This vertex shader is pretty basic. Notice that we defined a varying variable vUv to tell the fragment shader where the texture mapping is. This is important because we are not going to use the screen resolution (iResolution) for our base rendering; we will use the texture coordinates instead. We do this in order to integrate multiple shadertoys on different objects in the same THREEJS scene.
Step 2
Pick the shadertoys that we want and create the fragment shader. (I have chosen a simple toy that performs well: Simple tunnel 2D by niklashuss).
Here is the given code for this toy:
void main(void)
{
    vec2 p = gl_FragCoord.xy / iResolution.xy;
    vec2 q = p - vec2(0.5, 0.5);
    q.x += sin(iGlobalTime * 0.6) * 0.2;
    q.y += cos(iGlobalTime * 0.4) * 0.3;
    float len = length(q);
    float a = atan(q.y, q.x) + iGlobalTime * 0.3;
    float b = atan(q.y, q.x) + iGlobalTime * 0.3;
    float r1 = 0.3 / len + iGlobalTime * 0.5;
    float r2 = 0.2 / len + iGlobalTime * 0.5;
    float m = (1.0 + sin(iGlobalTime * 0.5)) / 2.0;
    vec4 tex1 = texture2D(iChannel0, vec2(a + 0.1 / len, r1));
    vec4 tex2 = texture2D(iChannel1, vec2(b + 0.1 / len, r2));
    vec3 col = vec3(mix(tex1, tex2, m));
    gl_FragColor = vec4(col * len * 1.5, 1.0);
}
Step 3
Customize the shadertoy raw code to have a complete GLSL fragment shader.
The first things missing from the code are the uniform and varying declarations. Add them at the top of your frag shader file (just copy and paste the following):
uniform float iGlobalTime;
uniform sampler2D iChannel0;
uniform sampler2D iChannel1;
varying vec2 vUv;
Note that only the shadertoy variables used for this sample are declared, plus the varying vUv previously declared in our vertex shader.
The last thing we have to tweak is the UV mapping, now that we have decided not to use the screen resolution. To do so, just replace the line that uses the iResolution uniform, i.e.:
vec2 p = gl_FragCoord.xy / iResolution.xy;
with:
vec2 p = -1.0 + 2.0 *vUv;
That's it, your shaders are now ready for use in your THREEJS scenes.
Step 4
Your THREEJS code:
Set up uniform:
var tuniform = {
    iGlobalTime: { type: 'f', value: 0.1 },
    iChannel0: { type: 't', value: THREE.ImageUtils.loadTexture('textures/tex07.jpg') },
    iChannel1: { type: 't', value: THREE.ImageUtils.loadTexture('textures/infi.jpg') },
};
Make sure the textures are wrapping:
tuniform.iChannel0.value.wrapS = tuniform.iChannel0.value.wrapT = THREE.RepeatWrapping;
tuniform.iChannel1.value.wrapS = tuniform.iChannel1.value.wrapT = THREE.RepeatWrapping;
Create the material with your shaders and apply it to a PlaneGeometry. The PlaneGeometry will simulate the shadertoy's 700x394 screen resolution; in other words, it will best transfer the work the artist intended to share.
var mat = new THREE.ShaderMaterial({
    uniforms: tuniform,
    vertexShader: vshader,
    fragmentShader: fshader,
    side: THREE.DoubleSide
});
var tobject = new THREE.Mesh(new THREE.PlaneGeometry(700, 394, 1, 1), mat);
Finally, in your update function, add the delta of the THREE.Clock() to the iGlobalTime value, not the total elapsed time.
tuniform.iGlobalTime.value += clock.getDelta();
That is it, you are now able to run most of the shadertoys with this setup...
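To put that last step in context, here is a minimal sketch of the update loop (the scene, camera, and renderer names are assumed, not shown above):
var clock = new THREE.Clock();
function animate() {
    requestAnimationFrame(animate);
    // accumulate the frame delta rather than using the total elapsed time
    tuniform.iGlobalTime.value += clock.getDelta();
    renderer.render(scene, camera);
}
animate();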
2022 edit: The version of ShaderFrog described below is no longer being actively developed. There are bugs in the compiler it uses that prevent it from parsing all shaders correctly for import, and it doesn't support many of Shadertoy's features, like multiple image buffers. I'm working on a new tool if you want to follow along; otherwise you can try the following method, but it likely won't work most of the time.
Original answer follows:
This is an old thread, but there's now an automated way to do this. Simply go to http://shaderfrog.com/app/editor/new and on the top right click "Import > ShaderToy" and paste in the URL. If it's not public you can paste in the raw source code. Then you can save the shader (requires sign up, no email confirm), and click "Export > Three.js".
You might need to tweak the parameters a little after import, but I hope to have this improved over time. For example, ShaderFrog doesn't support audio nor video inputs yet, but you can preview them with images instead.
Proof of concept:
ShaderToy https://www.shadertoy.com/view/MslGWN
ShaderFrog http://shaderfrog.com/app/view/247
Full disclosure: I am the author of this tool which I launched last week. I think this is a useful feature.
This is based on various sources, including the answer from @INF1.
Basically, you insert the missing uniform variables from Shadertoy (iGlobalTime etc., see this list: https://www.shadertoy.com/howto) into the fragment shader, then you rename mainImage(out vec4 z, in vec2 w) to main(), and then you change z in the source code to gl_FragColor. In most Shadertoys 'z' is 'fragColor'.
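As a rough illustration of that renaming (a made-up minimal shader, not one of the examples below):
// Shadertoy's mainImage(out vec4 z, in vec2 w) becomes main(), the output
// parameter becomes gl_FragColor, and fragCoord becomes gl_FragCoord.xy.
uniform float iGlobalTime;
uniform vec3 iResolution;
void main() {
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    gl_FragColor = vec4(uv, 0.5 + 0.5 * sin(iGlobalTime), 1.0);
}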
I did this for two cool shaders from this guy (https://www.shadertoy.com/user/guil) but unfortunately I didn't get the marble example to work (https://www.shadertoy.com/view/MtX3Ws).
A working jsFiddle is here: https://jsfiddle.net/dirkk0/zt9dhvqx/
Change the shader from frag1 to frag2 in line 56 to see both examples.
And don't 'Tidy' in jsFiddle - it breaks the shaders.
EDIT:
https://medium.com/@dirkk/converting-shaders-from-shadertoy-to-threejs-fe17480ed5c6

My fragment shader in WebGL program is setting all the colors from my texture to black

I have a simple game that uses three textures with transparent parts. I can see the silhouettes of my textures, but anywhere that doesn't have alpha set to zero returns black (0, 0, 0, 1).
Here's my fragment shader:
precision mediump float;
// our textures
uniform sampler2D u_image0;
uniform sampler2D u_image1;
uniform sampler2D u_image2;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main() {
    // Look up a color from the texture.
    vec4 textureColor = texture2D(u_image0, v_texCoord);
    if (textureColor.a < 0.5)
        discard;
    else
        gl_FragColor = vec4(textureColor.rgb, textureColor.a);
    vec4 textureColor1 = texture2D(u_image1, v_texCoord);
    if (textureColor1.a < 0.5)
        discard;
    else
        gl_FragColor = vec4(textureColor1.rgb, textureColor1.a);
    vec4 textureColor2 = texture2D(u_image2, v_texCoord);
    if (textureColor2.a < 0.5)
        discard;
    else
        gl_FragColor = vec4(textureColor2.rgb, textureColor2.a);
}
I got the conditional that tests for alpha from another question, where pixels with zero alpha were being set to white. It solved that problem, but I'm not sure if it scales properly to multiple textures. I'm pretty sure I'm doing it wrong.
Thanks in advance, and let me know if I need to add more code (vertex shader, etc).
It is unclear to me what you are actually trying to achieve.
The way you wrote this code makes me think that you do not know what the discard statement actually does: it completely discards the fragment; the current invocation of the shader is aborted immediately.
What your shader does is just discard the whole fragment if any of the 3 textures has an alpha value below 0.5. The fact that you have written to gl_FragColor before doing the discard does not matter at all. If all of the textures have an alpha above 0.5, the final color will be that of u_image2.
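If the intent is to layer the three textures back to front, one possible approach (a sketch that assumes the question's sampler and varying names, and that u_image2 is the topmost layer) is to composite by alpha instead of discarding early:
vec4 c0 = texture2D(u_image0, v_texCoord);
vec4 c1 = texture2D(u_image1, v_texCoord);
vec4 c2 = texture2D(u_image2, v_texCoord);
// stack the layers: each layer covers the ones below it according to its alpha
vec3 rgb = mix(mix(c0.rgb, c1.rgb, c1.a), c2.rgb, c2.a);
float alpha = max(c0.a, max(c1.a, c2.a));
if (alpha < 0.5)
    discard; // only drop fragments where no layer is visible
gl_FragColor = vec4(rgb, alpha);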
