Use 2 meshes + shader materials, each with a different fragment shader, in 1 scene (three.js)

I have 2 meshes, each with a ShaderMaterial and a different fragment shader. When I add both meshes to my scene, only one shows up. Below you can find my 2 fragment shaders (see both images for what they look like). They're basically the same.
What I want to achieve: use mesh1 as a mask and put the other one, mesh2 (the purple blob), on top of the mask.
Purple blob:
// three.js code
const geometry1 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material1 = new THREE.ShaderMaterial({
  uniforms: this.uniforms,
  vertexShader,
  fragmentShader,
  defines: {
    PR: window.devicePixelRatio.toFixed(1)
  }
});
const mesh1 = new THREE.Mesh(geometry1, material1);
this.scene.add(mesh1);
// fragment shader
void main() {
  vec2 res = u_res * PR;
  vec2 st = gl_FragCoord.xy / res.xy - 0.5;
  st.y *= u_res.y / u_res.x * 0.8;
  vec2 circlePos = st;
  float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
  float offx = v_uv.x + sin(v_uv.y + u_time * .1);
  float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
  float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
  float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
  vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
  vec4 bg2 = vec4(0., 0., 0., 0.);
  gl_FragColor = mix(bg, bg2, finalMask);
}
Blue mask:
// three.js code
const geometry2 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material2 = new THREE.ShaderMaterial({
  uniforms,
  vertexShader,
  fragmentShader,
  defines: {
    PR: window.devicePixelRatio.toFixed(1)
  }
});
const mesh2 = new THREE.Mesh(geometry2, material2);
this.scene.add(mesh2);
// fragment shader
void main() {
  vec2 res = u_res * PR;
  vec2 st = gl_FragCoord.xy / res.xy - 0.5;
  st.y *= u_res.y / u_res.x * 0.8;
  vec2 circlePos = st;
  float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
  float offx = v_uv.x + sin(v_uv.y + u_time * .1);
  float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
  float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
  float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
  vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
  vec4 bg2 = vec4(0., 0., 0., 0.);
  gl_FragColor = mix(bg, bg2, finalMask);
}
Render Target code
this.rtWidth = window.innerWidth;
this.rtHeight = window.innerHeight;
this.renderTarget = new THREE.WebGLRenderTarget(this.rtWidth, this.rtHeight);
this.rtCamera = new THREE.PerspectiveCamera(
  this.camera.settings.fov,
  this.camera.settings.aspect,
  this.camera.settings.near,
  this.camera.settings.far
);
this.rtCamera.position.set(0, 0, this.camera.settings.perspective);
this.rtScene = new THREE.Scene();
this.rtScene.add(this.purpleBlob);
const geometry = new THREE.PlaneGeometry(window.innerWidth, window.innerHeight, 1);
const material = new THREE.MeshPhongMaterial({
  map: this.renderTarget.texture,
});
this.mesh = new THREE.Mesh(geometry, material);
this.scene.add(this.mesh);
I'm still new to shaders so please be patient. :-)

There are probably infinite ways to mask in three.js. Here are a few.
Use the stencil buffer
The stencil buffer is similar to the depth buffer in that for every pixel in the canvas or render target there is a corresponding stencil pixel. You need to tell three.js you want a stencil buffer, and then when rendering you can tell it what to do with the stencil buffer as you draw things.
You set the stencil settings on Material.
You tell three.js
what to do if the pixel you're drawing fails the stencil test
what to do if the pixel you're drawing fails the depth test
what to do if the pixel you're drawing passes the depth test
The things you can tell it to do for each of those conditions are: keep (do nothing), increment, decrement, increment with wraparound, decrement with wraparound, or set to a specific value.
You can also specify what the stencil test is by setting Material.stencilFunc.
So, for example, you can clear the stencil buffer to 0 (the default), set the stencil test so it always passes, and set the conditions so that if the depth test passes, the stencil is set to 1. You then draw a bunch of things. Everywhere they are drawn there will now be a 1 in the stencil buffer.
Now you change the stencil test so it only passes if the stencil equals 1 (or 0) and then draw more stuff; things will only be drawn where the stencil equals the value you set.
This example uses the stencil.
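A minimal sketch of that recipe using the stencil properties on Material (available in newer three.js releases; the geometry variables below are placeholders, and depending on your three.js version you may need to create the WebGLRenderer with stencil: true):
// write 1 into the stencil buffer wherever the mask renders,
// without touching the color or depth buffers
const maskMaterial = new THREE.MeshBasicMaterial({
  colorWrite: false,
  depthWrite: false,
  stencilWrite: true,
  stencilRef: 1,
  stencilFunc: THREE.AlwaysStencilFunc,   // stencil test always passes
  stencilZPass: THREE.ReplaceStencilOp,   // on depth pass, write stencilRef
});
const maskMesh = new THREE.Mesh(maskGeometry, maskMaterial);
maskMesh.renderOrder = 1; // render the mask first
scene.add(maskMesh);

// draw the blob only where the stencil equals 1
const blobMaterial = new THREE.ShaderMaterial({
  uniforms, vertexShader, fragmentShader,
  stencilWrite: true,
  stencilRef: 1,
  stencilFunc: THREE.EqualStencilFunc,
});
const blobMesh = new THREE.Mesh(blobGeometry, blobMaterial);
blobMesh.renderOrder = 2; // render after the mask
scene.add(blobMesh);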
Mask with an alpha mask
In this case you need 2 color textures and an alpha texture. How you get those is up to you. For example, you could load all 3 from images, or you could generate all 3 using 3 render targets. Finally you pass all 3 to a shader that mixes them, as in
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, valueFromAlphaTexture);
This example uses this alpha mixing method.
Note that if one of your 2 color textures has an alpha channel you could use just 2 textures; you'd just pass one of the color textures as your mask.
Or of course you could calculate a mask based on the colors in one image or the other or both. For example
// assume you have a function that converts from rgb to hue, saturation, value
vec3 hsv = rgb2hsv(colorFromTexture1.rgb);
float hue = hsv.x;
// pick one or the other if color1 is close to green
float mixAmount = step(abs(hue - 0.33), 0.05);
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, mixAmount);
The point here is not that exact code; it's that you can make any formula you want for the mask, based on whatever you want: color, position, random math, sine waves based on time, some formula that generates a blob, whatever. The most common is code that just looks up a mixAmount from a texture, which is what the linked example above does.
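As a rough sketch of the setup side (the texture file names and uniform names here are placeholders, not from the question), the three textures could be wired into a ShaderMaterial like this:
const loader = new THREE.TextureLoader();
const material = new THREE.ShaderMaterial({
  uniforms: {
    tColor1: { value: loader.load('color1.png') },
    tColor2: { value: loader.load('color2.png') },
    tMask:   { value: loader.load('mask.png') },
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tColor1;
    uniform sampler2D tColor2;
    uniform sampler2D tMask;
    void main() {
      vec4 c1 = texture2D(tColor1, vUv);
      vec4 c2 = texture2D(tColor2, vUv);
      float m = texture2D(tMask, vUv).r; // red channel as the mask value
      gl_FragColor = mix(c1, c2, m);
    }
  `,
});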
ShaderToy style
Your code above appears to be a shadertoy-style shader which draws a fullscreen quad. Instead of drawing 2 separate things, you can just draw them both in the same shader:
vec4 computeBlueBlob() {
  ...
  return blueBlobColor;
}
vec4 computeWhiteBlob() {
  ...
  return whiteBlobColor;
}
void main() {
  vec4 color1 = computeBlueBlob();
  vec4 color2 = computeWhiteBlob();
  float mixAmount = color2.a; // note: color2.a could be any
                              // formula to decide which colors
                              // to draw
  gl_FragColor = mix(color1, color2, mixAmount);
}
Note, just like above, how you compute mixAmount is up to you. Base it off anything: color1.r, color2.r, some formula, some hue, some other blob-generation function, whatever.
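Applied to the shaders in the question, which both compute the same finalMask, a single-shader sketch might factor the mask into a function and mix the two layer colors directly (the blue color below is a placeholder; circle and snoise3 are the helpers already used in the question's shaders):
float computeBlobMask() {
  vec2 res = u_res * PR;
  vec2 st = gl_FragCoord.xy / res.xy - 0.5;
  st.y *= u_res.y / u_res.x * 0.8;
  float c = circle(st, 0.2, 1.) * 2.5;
  float offx = v_uv.x + sin(v_uv.y + u_time * .1);
  float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
  float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
  return smoothstep(1., 0.99, n + pow(c, 1.5));
}
void main() {
  float finalMask = computeBlobMask();
  vec4 purple = vec4(0.12, 0.07, 0.28, 1.0);
  vec4 blue = vec4(0.0, 0.2, 0.6, 1.0); // placeholder second color
  gl_FragColor = mix(blue, purple, finalMask);
}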

Related

Convert ndc coordinates to world coordinates in fragment shader threejs

My goal is to draw a circle around my mouse cursor over a plane.
I get NDC coordinates (-1 to +1) that represent my cursor position:
const rect = targetHTML.getBoundingClientRect();
const mousePositionX = event.clientX - rect.left;
const mousePositionY = event.clientY - rect.top;
this._currentPoint = {
  x: (mousePositionX / targetHTML.clientWidth * 2 - 1),
  y: (mousePositionY / targetHTML.clientHeight * -2 + 1),
};
I pass it to my fragment shader via uniforms:
this._cursorMaterial.uniforms.uBrushPosition.value =
new window.THREE.Vector2(this._currentPoint.x, this._currentPoint.y);
In my fragment shader, I want to convert it to a world coordinate in order to compare it to the fragment world location.
// vertex shader
varying vec4 vPos;
void main() {
  vPos = modelMatrix * vec4( position, 1.0 );
  gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
// fragment shader
varying vec4 vPos;
uniform vec2 uBrushPosition;
void main() {
  // convert uBrushPosition to world space
  vec3 brushWorldPosition = ?
  if (distance(brushWorldPosition, vPos.xyz) < 10.) {
    gl_FragColor = vec4(1., 0., 0., .5);
  }
  discard;
}
Not in the shader, but you can send it in as a uniform.
var mouseWorld = new THREE.Vector3( mouse.x, mouse.y, distanceFromCamera );
mouseWorld.unproject( camera );
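If the goal is the exact world-space point where the cursor hits the plane, a Raycaster-based variant of the same idea might look like this (planeMesh and the vec3 uniform uBrushWorldPosition are assumed names, not from the question):
const raycaster = new THREE.Raycaster();
// this._currentPoint is already in NDC, as computed in the question
raycaster.setFromCamera(
  new THREE.Vector2(this._currentPoint.x, this._currentPoint.y),
  camera
);
const hits = raycaster.intersectObject(planeMesh);
if (hits.length > 0) {
  // hits[0].point is the world-space intersection with the plane
  this._cursorMaterial.uniforms.uBrushWorldPosition.value.copy(hits[0].point);
}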

Mapping texture to THREE.Points

I'm trying to map one texture (same as on cube's side) to multiple points so that one point is colored with part of a texture and all the points together make up a complete image.
For doing this I have a custom shader that tries to map point position to texture like this:
var uniform = THREE.TextureShader.uniforms();
uniform.texture.value = texture;
uniform.bbMin.value = new THREE.Vector3(-2.5, -2.5, 0); //THREE.Points Box3 min value
uniform.bbMax.value = new THREE.Vector3(2.5, 2.5, 0); //THREE.Points Box3 max value
//Shader
"vec3 p = (position - bbMin) / (bbMax - bbMin);",
/*This should give me fraction between 0 and 1 to match part of texture but it is not*/
"vColor = texture2D(texture, p.xy).rgb;",
Codepen for testing is here.
Any ideas how to calculate it correctly?
Desired result would be something like this, only there would be space between tiles.
50000 points or 50000 planes, it's all the same: you need some way to pass in data per point or per plane that lets you compute your texture coordinates. Personally I'd choose planes, because you can rotate, scale, and flip planes, whereas you can't do that with POINTS.
In any case, there's an infinite number of ways to do that, so it's really up to you to pick one. For points you get 1 "chunk" of data per point, where by "chunk" I mean all the data from all the attributes you set up.
So, for example, you could set up an attribute with an X,Y position representing which piece of the sprite you want to draw. In your example you've divided it 6x6, so make a vec2 attribute with values 0-5, 0-5 selecting the portion of the sprite.
Pass that into the vertex shader then you can either do some math there or pass it into the fragment shader directly. Let's assume you pass it into the fragment shader directly.
gl_PointCoord provides the texture coordinates for the POINT, which go from 0 to 1, so
varying vec2 v_segment; // the segment of the sprite 0-5x, 0-5y
vec2 uv = (v_segment + gl_PointCoord) / 6.0;
vec4 color = texture2D(yourTextureUniform, uv);
Seems like it would work.
That one is hardcoded to 6x6. Change it to NxM by passing that in as a uniform
varying vec2 v_segment; // the segment of the sprite
uniform vec2 numSegments; // number of segments across and down the sprite
vec2 uv = (v_segment + gl_PointCoord) / numSegments;
vec4 color = texture2D(yourTextureUniform, uv);
Example:
"use strict";
var gl = twgl.getWebGLContext(document.getElementById("c"));
var programInfo = twgl.createProgramInfo(gl, ["vs", "fs"]);
// make a rainbow circle texture from a 2d canvas as it's easier than downloading
var ctx = document.createElement("canvas").getContext("2d");
ctx.canvas.width = 128;
ctx.canvas.height = 128;
var gradient = ctx.createRadialGradient(64,64,60,64,64,0);
for (var i = 0; i <= 12; ++i) {
gradient.addColorStop(i / 12,"hsl(" + (i / 12 * 360) + ",100%,50%");
}
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, 128, 128);
// make points and segment data
var numSegmentsAcross = 6;
var numSegmentsDown = 5;
var positions = [];
var segments = [];
for (var y = 0; y < numSegmentsDown; ++y) {
for (var x = 0; x < numSegmentsAcross; ++x) {
positions.push(x / (numSegmentsAcross - 1) * 2 - 1, y / (numSegmentsDown - 1) * 2 - 1);
segments.push(x, y);
}
}
var arrays = {
position: { size: 2, data: positions },
segment: { size: 2, data: segments },
};
var bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);
var tex = twgl.createTexture(gl, { src: ctx.canvas });
var uniforms = {
u_numSegments: [numSegmentsAcross, numSegmentsDown],
u_texture: tex,
};
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
twgl.setUniforms(programInfo, uniforms);
twgl.drawBufferInfo(gl, gl.POINTS, bufferInfo);
canvas { border: 1px solid black; }
<script id="vs" type="notjs">
attribute vec4 position;
attribute vec2 segment;
varying vec2 v_segment;
void main() {
  gl_Position = position;
  v_segment = segment;
  gl_PointSize = 20.0;
}
</script>
<script id="fs" type="notjs">
precision mediump float;
varying vec2 v_segment;
uniform vec2 u_numSegments;
uniform sampler2D u_texture;
void main() {
  vec2 uv = (v_segment + vec2(gl_PointCoord.x, 1.0 - gl_PointCoord.y)) / u_numSegments;
  gl_FragColor = texture2D(u_texture, uv);
}
</script>
<script src="https://twgljs.org/dist/twgl.min.js"></script>
<canvas id="c"></canvas>
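For reference, a rough three.js translation of the same per-point segment idea (names are placeholders; the shaders would use three.js's built-in projectionMatrix/modelViewMatrix and declare the custom segment attribute, and setAttribute assumes a newer three.js release):
const positions = [];
const segments = [];
for (let y = 0; y < numSegmentsDown; ++y) {
  for (let x = 0; x < numSegmentsAcross; ++x) {
    // three.js positions are 3D, so add a z of 0
    positions.push(x / (numSegmentsAcross - 1) * 2 - 1,
                   y / (numSegmentsDown - 1) * 2 - 1, 0);
    segments.push(x, y);
  }
}
const geo = new THREE.BufferGeometry();
geo.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
geo.setAttribute('segment', new THREE.Float32BufferAttribute(segments, 2));
const material = new THREE.ShaderMaterial({
  uniforms: {
    u_numSegments: { value: new THREE.Vector2(numSegmentsAcross, numSegmentsDown) },
    u_texture: { value: texture },
  },
  vertexShader,   // passes the segment attribute along as v_segment
  fragmentShader, // same v_segment + gl_PointCoord math as above
});
scene.add(new THREE.Points(geo, material));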

WebGL heightmap using vertex shader, using 32 bits instead of 8 bits

I'm using the following vertex shader (courtesy http://stemkoski.github.io/Three.js/Shader-Heightmap-Textures.html) to generate terrain from a grayscale height map:
uniform sampler2D bumpTexture;
uniform float bumpScale;
varying float vAmount;
varying vec2 vUV;
void main()
{
  vUV = uv;
  vec4 bumpData = texture2D( bumpTexture, uv );
  vAmount = bumpData.r; // assuming the map is grayscale, it doesn't matter if you use r, g, or b
  // move the position along the normal
  vec3 newPosition = position + normal * bumpScale * vAmount;
  gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
}
I'd like to have 32-bits of resolution, and have generated a heightmap that encodes heights as RGBA. I have no idea how to go about changing the shader code to accommodate this. Any direction or help?
bumpData.r, .g, .b and .a are all quantities in the range [0.0, 1.0], equivalent to the original byte values divided by 255.0.
So, depending on your endianness, a naive conversion back to the original int might be:
(bumpData.r * 255.0) +
(bumpData.g * 255.0 * 256.0) +
(bumpData.b * 255.0 * 256.0 * 256.0) +
(bumpData.a * 255.0 * 256.0 * 256.0 * 256.0)
So that's the same as a dot product with the vector (255.0, 65280.0, 16711680.0, 4278190080.0), which is likely to be a much more efficient way to implement it.
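A sketch of that decode dropped into the vertex shader from the question (assuming the lowest byte is in r; dividing by 2^32 − 1 renormalizes to [0, 1], though note a highp float can't hold all 32 bits exactly, so a little precision is still lost):
const vec4 byteWeights = vec4(255.0, 65280.0, 16711680.0, 4278190080.0);
float heightInt = dot(bumpData, byteWeights); // reassemble the packed value
vAmount = heightInt / 4294967295.0;           // normalize back to [0, 1]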
With three.js
const generateHeightTexture = (width) => {
  // let max_texture_width = RENDERER.capabilities.maxTextureSize;
  let pixels = new Float32Array(width * width);
  pixels.fill(0, 0, pixels.length);
  let texture = new THREE.DataTexture(pixels, width, width, THREE.AlphaFormat, THREE.FloatType);
  texture.magFilter = THREE.LinearFilter;
  texture.minFilter = THREE.NearestFilter;
  // texture.anisotropy = RENDERER.capabilities.getMaxAnisotropy();
  texture.needsUpdate = true;
  console.log('Built Physical Texture:', width, 'x', width);
  return texture;
};
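With a float DataTexture like this there is no byte-unpacking to do at all. Assuming the heights are written into the Float32Array (and that float textures are available, e.g. via OES_texture_float in WebGL1), the vertex shader from the question could read the height directly:
vAmount = texture2D( bumpTexture, uv ).a; // AlphaFormat stores the value in .a
vec3 newPosition = position + normal * bumpScale * vAmount;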

glClipPlane - Is there an equivalent in webGL?

I have a 3D mesh. Is there any possibility to render the sectional view (clipping) like glClipPlane in OpenGL?
I am using Three.js r65.
The latest shader that I have added is:
Fragment Shader:
uniform float time;
uniform vec2 resolution;
varying vec2 vUv;
void main( void )
{
  vec2 position = -1.0 + 2.0 * vUv;
  float red = abs( sin( position.x * position.y + time / 2.0 ) );
  float green = abs( cos( position.x * position.y + time / 3.0 ) );
  float blue = abs( cos( position.x * position.y + time / 4.0 ) );
  if ( position.x > 0.2 && position.y > 0.2 )
  {
    discard;
  }
  gl_FragColor = vec4( red, green, blue, 1.0 );
}
Vertex Shader:
varying vec2 vUv;
void main()
{
  vUv = uv;
  vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
  gl_Position = projectionMatrix * mvPosition;
}
Unfortunately, in the OpenGL ES specification against which WebGL was specified, there are no clip planes, and the vertex shader stage lacks the gl_ClipDistance output by which plane clipping is implemented in modern OpenGL.
However, you can use the fragment shader to implement per-fragment clipping: test the position of the incoming fragment against your set of clip planes, and if the fragment does not pass the test, discard it.
Update
Let's have a look at how clip planes are defined in fixed function pipeline OpenGL:
void ClipPlane( enum p, double eqn[4] );
The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn is an array of four double-precision floating-point values. These are the coefficients of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The inverse of the current model-view matrix is applied to these coefficients, at the time they are specified, yielding
p' = (p'1, p'2, p'3, p'4) = (p1, p2, p3, p4) inv(M)
(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly-conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates transpose( (x_e, y_e, z_e, w_e) ) that satisfy
(p'1, p'2, p'3, p'4) · (x_e, y_e, z_e, w_e) ≥ 0
lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.
So what you do is: you add uniforms by which you pass the clip plane parameters p', and you add another out/in pair of variables between the vertex and fragment shaders to pass the vertex eye-space position. Then in the fragment shader the first thing you do is perform the clip plane equation test; if it doesn't pass, you discard the fragment.
In the vertex shader
in vec3 vertex_position;
out vec4 eyespace_pos;
uniform mat4 modelview;
void main()
{
  /* ... */
  eyespace_pos = modelview * vec4(vertex_position, 1);
  /* ... */
}
In the fragment shader
in vec4 eyespace_pos;
uniform vec4 clipplane;
void main()
{
  if( dot( eyespace_pos, clipplane ) < 0.0 ) {
    discard;
  }
  /* ... */
}
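On the three.js side, a rough sketch for feeding that uniform (assuming clipplane is declared as a Vector4-valued uniform on the material; THREE.Plane.applyMatrix4 handles transforming the plane into eye space):
// world-space plane with normal (-1, 0, 0) through the origin
var plane = new THREE.Plane( new THREE.Vector3( -1, 0, 0 ), 0 );
// transform into eye space so it matches eyespace_pos in the shader
var eyePlane = plane.clone().applyMatrix4( camera.matrixWorldInverse );
material.uniforms.clipplane.value.set(
  eyePlane.normal.x, eyePlane.normal.y, eyePlane.normal.z, eyePlane.constant
);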
In newer versions (> r76) of three.js, clipping is supported in the THREE.WebGLRenderer. There is an array property called clippingPlanes where you can add your custom clipping planes (THREE.Plane instances).
For three.js you can check these two examples:
1) WebGL clipping (code base here on GitHub)
2) WebGL clipping advanced (code base here on GitHub)
A simple example
To add a clipping plane to the renderer you can do:
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var plane = new THREE.Plane( normal, constant );
renderer.clippingPlanes = [plane];
Here a fiddle to demonstrate this.
You can also clip on object level by adding a clipping plane to the object material. For this to work you have to set the renderer localClippingEnabled property to true.
// set renderer
renderer.localClippingEnabled = true;
// add clipping plane to material
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var color = 0xff0000;
var plane = new THREE.Plane( normal, constant );
var material = new THREE.MeshBasicMaterial({ color: color });
material.clippingPlanes = [plane];
var mesh = new THREE.Mesh( geometry, material );
Note: In r77 some of the clipping functionality in the THREE.WebGLRenderer was moved to a separate THREE.WebGLClipping class; check here for reference in the three.js master branch.

Three js 2d matrix visualization

I am trying to visualize 2d matrices using three.js. These matrices are the states of the neurons in a neural network. The matrices are not huge (64 x 32). The values in these matrices will change, and I want those new values to be displayed in the visualization.
For the 2d matrix I want a plane of neurons.
I have tried creating a particle system using a plane geometry with as many vertices as neurons in the data matrix.
var width = 32;
var height = 64;
var planeGeometry = new THREE.PlaneGeometry( width, height, width - 1 , height - 1 );
var particlePlane = new THREE.ParticleSystem( planeGeometry, shaderMaterial );
In the fragment shader each particle is given a base texture (a white circle)
gl_FragColor = texture2D(baseTexture, gl_PointCoord);
And then I use a second texture containing the data matrix values (greyscale pixel values) to modify each base texture.
// Sets particle texture to desired color
// vertexPosition is a vec2 in coordinates local to the plane
gl_FragColor = gl_FragColor * texture2D( dataTexture, vertexPosition );
To calculate vertexPosition in the vertex shader I do the following (irrelevant lines omitted):
uniform float width;
uniform float height;
varying vec2 vertexPosition;
void main()
{
  vertexPosition = vec2( position.x / width, position.y / height );
}
This is where I'm getting caught up. The vertexPosition does not seem to be mapping properly to the dataTexture pixels. I want a one to one correspondence between particles and pixels.
How do I properly map from the location of particles/vertexes on a plane to equivalent pixel locations in a texture?
I am new to three js, so please feel free to tell me my approach is totally off.
To get texture coordinates, three.js provides a ready-to-use uv attribute (along with the projection matrices) in GLSL. Here is what I would use as a vertex shader:
varying vec2 vertexPosition;
void main() {
  vertexPosition = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Then you have the xy position to use in the fragment shader via the varying vertexPosition.
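Since the neuron values change over time, the data texture also needs re-uploading. Assuming dataTexture is a THREE.DataTexture backed by a typed array (a hypothetical setup matching the question), a sketch:
// overwrite the backing array with the new neuron states
dataTexture.image.data.set(newValues); // newValues: flat typed array, width * height values
dataTexture.needsUpdate = true;        // re-upload to the GPU on the next render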
