Three.js, custom shader and PNG texture with transparency

I have an extremely simple PNG texture: a grey circle with a transparent background.
I use it as a uniform map for a THREE.ShaderMaterial:
var uniforms = THREE.UniformsUtils.merge( [basicShader.uniforms] );
uniforms['map'].value = THREE.ImageUtils.loadTexture( "img/particle.png" );
uniforms['size'].value = 100;
uniforms['opacity'].value = 0.5;
uniforms['psColor'].value = new THREE.Color( 0xffffff );
Here is my fragment shader (just part of it):
gl_FragColor = vec4( psColor, vOpacity );
gl_FragColor = gl_FragColor * texture2D( map,vec2( gl_PointCoord.x, 1.0 - gl_PointCoord.y ) );
gl_FragColor = gl_FragColor * vec4( vColor, 1.0 );
I applied the material to some particles (a THREE.PointCloud mesh) and it works quite well:
But if I rotate the camera by more than 180 degrees I see this:
I understand that the fragment shader is not correctly taking into account the alpha value of the PNG texture.
What is the best approach in this case, to get the right color and opacity (from custom attributes) and still get the alpha right from the PNG?
And why is it behaving correctly on one side?

Transparent objects must be rendered from back to front -- from furthest to closest. This is because of the depth buffer.
But PointCloud particles are not sorted based on distance from the camera. That would be too inefficient. The particles are always rendered in the same order, regardless of the camera position.
You have several work-arounds.
The first is to discard fragments for which the alpha is low. You can use a pattern like so:
if ( textureColor.a < 0.5 ) discard;
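For example, applied to the fragment shader from the question (a sketch only; map, psColor, vOpacity and vColor are the names used above):
vec4 texColor = texture2D( map, vec2( gl_PointCoord.x, 1.0 - gl_PointCoord.y ) );
if ( texColor.a < 0.5 ) discard; // skip fragments where the PNG is (mostly) transparent
gl_FragColor = vec4( psColor, vOpacity ) * texColor * vec4( vColor, 1.0 );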
Another option is to set material.depthTest = false or material.depthWrite = false. You might not like the side effects, however, if you have other objects in the scene.
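For reference, a minimal material sketch with those flags (uniforms, vertexShader and fragmentShader stand in for the question's own values):
var material = new THREE.ShaderMaterial( {
    uniforms: uniforms,             // the question's uniforms
    vertexShader: vertexShader,     // the question's shader sources
    fragmentShader: fragmentShader,
    transparent: true,
    depthWrite: false               // or: depthTest: false
} );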
three.js r.71

Related

Texture lookup inside FBO simulation shader

I'm trying to make an FBO particle system by calculating positions in a separate pass, using code from this post: http://barradeau.com/blog/?p=621.
I render a sphere of particles, without any movement:
The only thing I'm adding so far is a texture lookup in the simulation fragment shader:
void main() {
vec3 pos = texture2D( texture, vUv ).xyz;
//THIS LINE, pos is approx in -200..200 range
float map = texture2D(texture1, abs(pos.xy/200.)).r;
...
// save map value in ping-pong texture as alpha
gl_FragColor = vec4( pos, map );
texture1 is a texture that is half black, half white.
Then in the render vertex shader I read this map parameter:
map = texture2D( positions, position.xy ).a;
and use it in the render fragment shader to pick the color:
vec3 finalColor = mix(vec3(1.,0.,0.),vec3(0.,1.,0.),map);
gl_FragColor = vec4( finalColor, .2 );
So what I hope to see is this (made by setting the same texture in the render shaders):
But what I actually see is this (setting the texture in the simulation shaders):
The colors are mixed up; mostly you can see more red ones where they should be, but there are a lot of green particles in between.
I also tried to make my own demo with a simplified texture and the same idea, and I got this:
Also mixed up, but you can still make out the image.
Same error.
I think I am missing something obvious, but I have been struggling with this for a couple of days now and cannot find the mistake myself.
I would be very grateful if someone could point me in the right direction. Thank you in advance!
Demo with error: http://cssing.org.ua/examples/fbo-error/
Full code I'm referring to: https://github.com/akella/fbo-test
You should disable texture filtering by using GL_NEAREST min/mag filters.
My guess is that THREE.TextureLoader() loads the texture with mipmaps, and the texture2D call in the vertex shader samples the lowest-resolution mipmap. In vertex shaders you should use texture2DLod(texture, texCoord, 0.0); note the third parameter, lod, which selects mipmap level 0.
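In three.js that might look like the following (a sketch; the file name is made up):
var loader = new THREE.TextureLoader();
loader.load( 'texture1.png', function ( tex ) {   // hypothetical file name
    tex.minFilter = THREE.NearestFilter;  // no interpolation between texels
    tex.magFilter = THREE.NearestFilter;
    tex.generateMipmaps = false;          // lookups always see the full-resolution data
    // assign tex to the simulation material's texture1 uniform here
} );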

Perpendicular falloff material

I want to make a falloff semi-transparent shader: opaque when normals are perpendicular to the camera direction and transparent when normals face towards the camera. Here is the code I use so far:
vec3 vertexNormal = normalize( normalMatrix * normal );
vec3 viewDir = vec3( 0.0, 0.0, 1.0 );
float dotProd = dot(vertexNormal, viewDir);
alpha = abs ( 1.0 - dotProd );
It works, but when the objects are not located in the center of the camera view the falloff is no longer consistent; the farther side has a larger falloff:
Falloff larger towards the edge of the camera view
Is there a way to get a consistent falloff thickness across the whole camera view (every sphere would still be distorted by perspective, but the falloff contour would be the same everywhere)?
Thanks in advance!
Unless you're using an orthographic camera, your view direction is incorrect.
Try
vec4 vp = modelViewMatrix * vec4( position, 1.);
vec3 viewdir = - normalize(vp.xyz);
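Combined with the falloff from the question, it becomes something like this (a sketch only, keeping the question's variable names):
vec4 vp = modelViewMatrix * vec4( position, 1.0 );
vec3 viewDir = -normalize( vp.xyz );                 // per-vertex direction towards the camera
vec3 vertexNormal = normalize( normalMatrix * normal );
float dotProd = dot( vertexNormal, viewDir );
alpha = abs( 1.0 - dotProd );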

Threejs: PointCloudMaterial size compared to ShaderMaterial gl_PointSize with size attenuation

I am curious to know what the relationship is between gl_PointSize and the size property within PointCloudMaterial.
When I create a PointCloud with PointCloudMaterial and set the size property to 1, the particles are far larger than when I create a PointCloud with a ShaderMaterial and set the size parameter for the vertex shader to 1. I also account for size attenuation, as in the PointCloudMaterial shader:
<script type="x-shader/x-vertex" id="particle_vs">
uniform float size;
uniform float scale;
void main() {
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_PointSize = size * ( scale / length( mvPosition.xyz ) );
gl_Position = projectionMatrix * mvPosition;
}
</script>
I have extracted a simple example of my problem here:
http://dev.cartelle.nl/particle-example/
The red particles are assigned a PointCloudMaterial set to size 1.
The green particles are assigned a ShaderMaterial that has a Vertex Shader that accounts for size attenuation like in PointCloudMaterial. I've set the size to 300 in this case, and as you can see the green particles are still smaller.
My expected result is for the ShaderMaterial to use the same unit of measure as the size property on PointCloudMaterial. I need both of these materials to work together, so I'm trying to figure out the relationship between these sizes. Is there something I'm missing in the vertex shader?
Thanks!
Johnny
In your ShaderMaterial, you need to set
scale: { type: 'f', value: window.innerHeight / 2 },
This is because of the following line in the WebGLRenderer method refreshUniformsParticle():
uniforms.scale.value = _canvas.height / 2.0; // TODO: Cache this.
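So a ShaderMaterial set up to match PointCloudMaterial's sizing might look like this (a sketch reusing the vertex shader from the question; the fragment shader is just a flat-color placeholder):
var material = new THREE.ShaderMaterial( {
    uniforms: {
        size:  { type: 'f', value: 1.0 },
        scale: { type: 'f', value: window.innerHeight / 2 }  // same value the renderer feeds PointCloudMaterial
    },
    vertexShader: document.getElementById( 'particle_vs' ).textContent,
    fragmentShader: 'void main() { gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 ); }'
} );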
three.js r.71

How to get fullscreen texture coordinates for a fullscreen texture from a previous rendering pass?

I do two rendering passes in a WebGL application using three.js (contrived example here):
renderer.render(depthScene, camera, depthTarget);
renderer.render(scene, camera);
The first rendering pass is to the render target depthTarget which I want to access in the second rendering pass as a texture uniform:
uniform sampler2D tDepth;
float unpack_depth( const in vec4 rgba_depth ) { ... }
void main() {
vec2 screenTexCoord = vec2( 1.0, 1.0 );
float depth = 1.0 - unpack_depth( texture2D( tDepth, screenTexCoord ) );
gl_FragColor = vec4( vec3( depth ), 1.0 );
}
My question is how do I get the value for screenTexCoord? It is not gl_FragCoord.xy.
To avoid a possible misunderstanding: I don't want to render the texture from the first pass to a quad. I want to use the texture from the first pass while rendering the geometry in the second pass.
EDIT:
According to the WebGL specification, gl_FragCoord contains window coordinates, which are normalized device coordinates (NDC) scaled by the viewport. The NDC are within [-1, 1], so the following should yield coordinates within [0, 1] for the texture lookup:
vec2 ndcXY = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
vec2 screenTexCoord = (ndcXY+1.0)/2.0;
But somewhere I must be wrong, because the updated example still does not show the (packed) depth?!
I finally figured it out myself. The correct way to calculate the texture coordinates is simply:
vec2 screenTexCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
See a working example here.
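The viewWidth and viewHeight values have to be supplied as uniforms by the application; a sketch (using the names from the question and the r.7x-era convention of assigning the render target directly to a texture uniform):
uniforms.tDepth     = { type: 't', value: depthTarget };                // result of the first pass
uniforms.viewWidth  = { type: 'f', value: renderer.domElement.width };
uniforms.viewHeight = { type: 'f', value: renderer.domElement.height };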

Fragment shader mixing two cube textures based on alpha not working

I have a problem that is completely eluding me. I am trying to mix two cube textures together, but for some reason I cannot seem to mix one texture with the other based on the alpha. The function looks like this:
vec4 baseColor = textureCube( tCube, vec3( tFlip * vWorldPosition.x, vWorldPosition.yz ) );
vec4 atmosphereColor = textureCube( tAtmosphere, vec3( tFlip * vWorldPosition.x, vWorldPosition.yz ) );
vec4 texelColor = mix( atmosphereColor, baseColor, atmosphereColor.a );
gl_FragColor = texelColor;
All the materials concerned have depthWrite: true, depthTest: true and transparent: true. One is a texture with 6 images and the other is a WebGLRenderTarget. The WebGLRenderTarget is created with an alpha channel.
format: THREE.RGBAFormat
Individually they both work as expected. For example, if I use gl_FragColor = atmosphereColor;, the alpha is visible. But as soon as I try to mix one with the other, no alpha is applied. So, for example, when I do this:
gl_FragColor = vec4( baseColor.r, baseColor.g, baseColor.b, atmosphereColor.a );
It still will not use the alpha value provided (it just sets it to 1.0), even though I know the value is present and varying, since I just set gl_FragColor to atmosphereColor and could see it varying.
Any ideas? :(
I was able to find the source of the problem. It was related to the clearAlpha parameter of the renderer, which needed to be set to 0 instead of 1 in my case.
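In code, that amounts to something like this (a sketch; the second argument to setClearColor is the clear alpha):
var renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0x000000, 0 );  // clear alpha 0 instead of 1, per the fix above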
