Changing fresnel falloff on ThreeJS shader - three.js

I've been making use of this shader inside of my ThreeJS project, but I've more or less copied the code verbatim because I have no idea how to write a shader function. Basically, I want to change the rate of falloff on the fresnel effect so that only the edges really use the colour, with a slight glow fading toward the inside.
var material = THREE.extendMaterial(THREE.MeshStandardMaterial, {
    // Will be prepended to vertex and fragment code
    header: 'varying vec3 vNN; varying vec3 vEye;',
    fragmentHeader: 'uniform vec3 fresnelColor;',
    // Insert code lines by hinting at an existing line
    vertex: {
        // Inserts the lines after #include <fog_vertex>
        '#include <fog_vertex>': `
            mat4 LM = modelMatrix;
            LM[2][3] = 0.0;
            LM[3][0] = 0.0;
            LM[3][1] = 0.0;
            LM[3][2] = 0.0;
            vec4 GN = LM * vec4(objectNormal.xyz, 1.0);
            vNN = normalize(GN.xyz);
            vEye = normalize(GN.xyz-cameraPosition);`
    },
    fragment: {
        'gl_FragColor = vec4( outgoingLight, diffuseColor.a );' :
            `gl_FragColor.rgb += ( 1.0 - -min(dot(vEye, normalize(vNN) ), 0.0) ) * fresnelColor;`
    },
    // Uniforms (will be applied to existing or added)
    uniforms: {
        diffuse: new THREE.Color( 'black' ),
        fresnelColor: new THREE.Color( 'blue' )
    }
});
I've tried changing the 1.0 in this line: gl_FragColor.rgb += ( 1.0 - -min(dot(vEye, normalize(vNN) ), 0.0) ) * fresnelColor; and while that did stop the gradient of the fresnel, it was a hard stop, as though it was clamping the levels rather than changing the rate of the gradient.
I just need help making the falloff reach less far into my models, so that only the edges really show it.

Maybe this will help:
fragment: {
    'gl_FragColor = vec4( outgoingLight, diffuseColor.a );' : `
        float m = ( 1.0 - -min(dot(vEye, normalize(vNN)), 0.0) );
        m = pow(m, 8.); // the greater the second parameter, the thinner the effect
        gl_FragColor.rgb += m * fresnelColor;
    `
}
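If you want to tune the falloff at runtime instead of editing the shader, the exponent can be exposed as a uniform. Below is a minimal sketch, assuming the same THREE.extendMaterial setup as the question; only the parts that change are shown, the fresnelPower name is new here, and the last line assumes the extended material exposes a standard uniforms object (as it does for fresnelColor).

fragmentHeader: 'uniform vec3 fresnelColor; uniform float fresnelPower;',
fragment: {
    'gl_FragColor = vec4( outgoingLight, diffuseColor.a );' : `
        float m = ( 1.0 - -min(dot(vEye, normalize(vNN)), 0.0) );
        m = pow(m, fresnelPower); // higher power pushes the glow toward the silhouette
        gl_FragColor.rgb += m * fresnelColor;
    `
},
uniforms: {
    diffuse: new THREE.Color( 'black' ),
    fresnelColor: new THREE.Color( 'blue' ),
    fresnelPower: 8.0
}

// later, e.g. from a GUI slider:
material.uniforms.fresnelPower.value = 4.0; // smaller value = softer, wider glow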

Related

How to get direction towards camera from vertex? (in a vertex shader, glsl)

I have this code:
vec4 localPosition = vec4( position, 1.);
vec4 worldPosition = modelMatrix * localPosition;
vec3 look = normalize( vec3(cameraPosition) - vec3(worldPosition) );
vec3 transformed = vec3( position ) + look;
But for some reason, it just moves the vertex 1 unit towards the origin point in the scene (0,0,0).
I need it to move the vertex towards the camera(where you are viewing the scene from).
I can't seem to find clear information anywhere on how to accomplish this.
It turned out to be a three.js issue: I had to set isShaderMaterial = true in order to get the cameraPosition uniform to update.
material.isShaderMaterial = true; //We need to set this so that the cameraPosition uniform is updated in the shader
material.onBeforeCompile = function ( shader ) {
    shader.vertexShader = shader.vertexShader.replace(
        '#include <begin_vertex>',
        [
            'float myOffset = 0.0;',
            'myOffset = (vColor.r + vColor.g + vColor.b) < 3.0 ? 0.01 : 0.0;',
            'vec4 localPosition = vec4( position, 1.);',
            'vec4 worldPosition = modelMatrix * localPosition;',
            'vec3 look = myOffset * normalize( cameraPosition - vec3(worldPosition) );',
            'vec3 transformed = vec3( position ) + look;'
        ].join( '\n' )
    );
    material.userData.shader = shader;
};
If you have a view matrix, transform the vertex position into view coordinates; then you can apply the offset along the camera's axes.
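A minimal GLSL sketch of that view-space approach (not from the original answer; myOffset is just an illustrative amount to move):

// in view space the camera sits at the origin, so the direction toward it
// is simply the negated view-space position
float myOffset = 0.01; // hypothetical distance to move toward the camera
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
vec3 towardCamera = normalize( -mvPosition.xyz );
mvPosition.xyz += towardCamera * myOffset;
gl_Position = projectionMatrix * mvPosition;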

ThreeJS Shader Dynamic Texture

I have this shader code below. I want to add a new uniform for another texture and have it applied to the vertices whose index is divisible by 4.
uniform vec3 color;
uniform sampler2D texture;
varying vec4 vColor;
void main() {
    vec4 outColor = texture2D( texture, gl_PointCoord );
    if ( outColor.a < 0.5 ) discard;
    gl_FragColor = outColor * vec4( color * vColor.xyz, 0.5 );
    float depth = gl_FragCoord.z / gl_FragCoord.w;
    const vec3 fogColor = vec3( 0.0 );
    float fogFactor = smoothstep( 200.0, 600.0, depth );
    gl_FragColor = mix( gl_FragColor, vec4( fogColor, gl_FragColor.w ), fogFactor );
}
I want to add a condition, something like index % 4 === 0 ? firstTexture : secondTexture, but I do not know how to get the vertex index or how to use the modulo operator in the shader language.
WebGL GLSL does not provide a vertex index, so you'll have to provide that data manually. For more information, see this question.
The modulus operator in GLSL is a function called mod().
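A minimal sketch of one way to wire that up, matching the old-style three.js API used in this question (the attribute name aIndex and both sampler names are made up for illustration):

// JavaScript: give every vertex its own index as a float attribute
var count = geometry.attributes.position.count;
var indexData = new Float32Array( count );
for ( var i = 0; i < count; i ++ ) indexData[ i ] = i;
geometry.addAttribute( 'aIndex', new THREE.BufferAttribute( indexData, 1 ) );
// (or declare these two samplers in the ShaderMaterial's uniforms when constructing it)
material.uniforms.firstTexture = { type: 't', value: firstTexture };
material.uniforms.secondTexture = { type: 't', value: secondTexture };

In the vertex shader, declare attribute float aIndex; and varying float vIndex; and assign vIndex = aIndex; inside main(). In the fragment shader, sample both textures and choose with mod(), since selecting samplers dynamically is unreliable in WebGL1:

uniform sampler2D firstTexture;
uniform sampler2D secondTexture;
varying float vIndex;

void main() {
    vec4 a = texture2D( firstTexture, gl_PointCoord );
    vec4 b = texture2D( secondTexture, gl_PointCoord );
    vec4 outColor = mod( vIndex, 4.0 ) < 0.5 ? a : b;
    if ( outColor.a < 0.5 ) discard;
    gl_FragColor = outColor;
}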

Convert ndc coordinates to world coordinates in fragment shader threejs

My goal is to draw a circle around my mouse cursor over a plane.
I get NDC coordinates (-1 to +1) that represent my cursor position:
const rect = targetHTML.getBoundingClientRect();
const mousePositionX = event.clientX - rect.left;
const mousePositionY = event.clientY - rect.top;
this._currentPoint = {
    x: (mousePositionX / targetHTML.clientWidth * 2 - 1),
    y: (mousePositionY / targetHTML.clientHeight * -2 + 1),
};
I pass it to my fragment shader via uniforms:
this._cursorMaterial.uniforms.uBrushPosition.value =
new window.THREE.Vector2(this._currentPoint.x, this._currentPoint.y);
In my fragment shader, I want to convert it to a world coordinate in order to compare it to the fragment world location.
// vertex shader
varying vec4 vPos;
void main() {
    vPos = modelMatrix * vec4(position, 1.0 );
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0 );
}
// fragment shader
varying vec4 vPos;
uniform vec2 uBrushPosition;
void main() {
    // convert uBrushPosition to world space
    vec3 brushWorldPosition = ?
    if (distance(brushWorldPosition, vPos.xyz) < 10.) {
        gl_FragColor = vec4(1., 0., 0., .5);
    } else {
        discard;
    }
}
Not in the shader, but you can send it in as a uniform.
var mouseWorld = new THREE.Vector3( mouse.x, mouse.y, distanceFromCamera )
mouseWorld.unproject( camera )
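A follow-up sketch of how that unprojected point might then be used (uBrushWorldPosition is a name introduced here, zNdc is whatever NDC depth you choose to unproject at, and vPos is the world-space varying from the question's vertex shader):

// JavaScript, on mouse move
// assumes the material was created with uniforms: { uBrushWorldPosition: { value: new THREE.Vector3() } }
var zNdc = 0.5; // placeholder NDC depth to unproject at
var mouseWorld = new THREE.Vector3( this._currentPoint.x, this._currentPoint.y, zNdc );
mouseWorld.unproject( camera );
this._cursorMaterial.uniforms.uBrushWorldPosition.value.copy( mouseWorld );

// fragment shader
uniform vec3 uBrushWorldPosition;
varying vec4 vPos;
void main() {
    if ( distance( uBrushWorldPosition, vPos.xyz ) < 10. ) {
        gl_FragColor = vec4( 1., 0., 0., .5 );
    } else {
        discard;
    }
}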

Weird behavior if DataTextures are not square (1:1)

I have a pair of shader programs where everything works great if my DataTextures are square (1:1), but if one or both are at a 2:1 (width:height) ratio, the behavior gets messed up. I can pad each of the buffers with unused filler to make sure they are always square, but this seems unnecessarily costly (memory-wise) in the long run, as one of the two buffers is already quite large. Is there a way to handle a 2:1 buffer in this scenario?
I have a pair of shader programs:
The first is a single frag shader used to calculate the physics for my program (it writes out a texture tPositions to be read by the second set of shaders). It is driven by three.js's GPUComputationRenderer script (resolution set to the size of my largest buffer).
The second pair of shaders (vert and frag) use the data texture tPositions produced by the first shader program to then render out the visualization (resolution set at the window size).
The visualization is a grid of variously shaped particle clouds. In the shader programs, there are textures of two different sizes: The smaller sized textures contain information for each of the particle clouds (one texel per cloud), larger sized textures contain information for each particle in all of the clouds (one texel per particle). Both have a certain amount of unused filler tacked on the end to fill them out to a power of 2.
Texel-per-particle sized textures (large): tPositions, tOffsets
Texel-per-cloud sized textures (small): tGridPositionsAndSeeds, tSelectionFactors
As I said before, the problem is that when these two buffer sizes (the large and the small) are at a 1:1 (width: height) ratio, the programs work just fine; however, when one or both are at a 2:1 (width:height) ratio the behavior is a mess. What accounts for this, and how can I address it? Thanks in advance!
UPDATE: Could the problem be related to the fact that I store the texel coords used to read the tPositions texture in the second shader program's position attribute? If so, perhaps this GitHub issue regarding texel coords in the position attribute is related, though I can't find a corresponding question/answer here on SO.
UPDATE 2:
I'm also looking into whether this could be an unpack alignment issue. Thoughts?
Here's the set up in Three.js for the first shader program:
function initComputeRenderer() {
textureData = MotifGrid.getBufferData();
gpuCompute = new GPUComputationRenderer( textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, renderer );
dtPositions = gpuCompute.createTexture();
dtPositions.image.data = textureData.tPositions;
offsetsTexture = new THREE.DataTexture( textureData.tOffsets, textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, THREE.RGBAFormat, THREE.FloatType );
offsetsTexture.needsUpdate = true;
gridPositionsAndSeedsTexture = new THREE.DataTexture( textureData.tGridPositionsAndSeeds, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
gridPositionsAndSeedsTexture.needsUpdate = true;
selectionFactorsTexture = new THREE.DataTexture( textureData.tSelectionFactors, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
positionVariable = gpuCompute.addVariable( "tPositions", document.getElementById( 'position_fragment_shader' ).textContent, dtPositions );
positionVariable.wrapS = THREE.RepeatWrapping; // repeat wrapping for use only with bit powers: 8x8, 16x16, etc.
positionVariable.wrapT = THREE.RepeatWrapping;
gpuCompute.setVariableDependencies( positionVariable, [ positionVariable ] );
positionUniforms = positionVariable.material.uniforms;
positionUniforms.tOffsets = { type: "t", value: offsetsTexture };
positionUniforms.tGridPositionsAndSeeds = { type: "t", value: gridPositionsAndSeedsTexture };
positionUniforms.tSelectionFactors = { type: "t", value: selectionFactorsTexture };
positionUniforms.uPerMotifBufferWidth = { type : "f", value : textureData.uPerMotifBufferWidth };
positionUniforms.uPerMotifBufferHeight = { type : "f", value : textureData.uPerMotifBufferHeight };
positionUniforms.uTime = { type: "f", value: 0.0 };
positionUniforms.uXOffW = { type: "f", value: 0.5 };
}
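For context, a GPUComputationRenderer set up like this is normally driven once per frame, with the computed texture handed to the display material. A sketch assuming the variables defined above (gridOfMotifs.material standing in for the second program's material is an assumption here):

gpuCompute.init();

function animate() {
    requestAnimationFrame( animate );
    positionUniforms.uTime.value += 0.01;
    gpuCompute.compute();
    // feed the freshly computed positions into the render material's tPositions uniform
    gridOfMotifs.material.uniforms.tPositions.value =
        gpuCompute.getCurrentRenderTarget( positionVariable ).texture;
    renderer.render( scene, camera );
}
animate();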
Here is the first shader program (only a frag for physics calculations):
// tPositions is handled by the GPUCompute script
uniform sampler2D tOffsets;
uniform sampler2D tGridPositionsAndSeeds;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform float uTime;
uniform float uXOffW;
[...skipping a noise function for brevity...]
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec4 offsets = texture2D( tOffsets, uv ).xyzw;
float alphaMass = offsets.z;
float cellIndex = offsets.w;
if (cellIndex >= 0.0) {
float damping = 0.98;
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferWidth)*texelSizeX, floor(cellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float swapState = selectionFactors.x;
vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw;
vec2 noiseSeed = gridPosition.zw;
vec4 nowPos;
vec2 velocity;
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
if ( swapState == 0.0 ) {
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
} else { // if swapState == 1
//nowPos = vec4( -(uTime) + gridPosition.x + offsets.x, gridPosition.y + offsets.y, 0.0, 0.0 );
nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 );
velocity = vec2(0.0, 0.0);
}
[...skipping the physics for brevity...]
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
// Write new position out
gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
}
Here is the setup for the second shader program:
Note: The renderer for this section is a WebGLRenderer at window size
function makePerParticleReferencePositions() {
var positions = new Float32Array( perParticleBufferSize * 3 );
var texelSizeX = 1 / perParticleBufferDimensions.width;
var texelSizeY = 1 / perParticleBufferDimensions.height;
for ( var j = 0, j3 = 0; j < perParticleBufferSize; j ++, j3 += 3 ) {
positions[ j3 + 0 ] = ( ( j % perParticleBufferDimensions.width ) / perParticleBufferDimensions.width ) + ( 0.5 * texelSizeX );
positions[ j3 + 1 ] = ( Math.floor( j / perParticleBufferDimensions.height ) / perParticleBufferDimensions.height ) + ( 0.5 * texelSizeY );
positions[ j3 + 2 ] = j * 0.0001; // this is the real z value for the particle display
}
return positions;
}
var positions = makePerParticleReferencePositions();
...
// Add attributes to the BufferGeometry:
gridOfMotifs.geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
gridOfMotifs.geometry.addAttribute( 'aTextureIndex', new THREE.BufferAttribute( motifGridAttributes.aTextureIndex, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aAlpha', new THREE.BufferAttribute( motifGridAttributes.aAlpha, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aCellIndex', new THREE.BufferAttribute(
motifGridAttributes.aCellIndex, 1 ) );
uniformValues = {};
uniformValues.tSelectionFactors = motifGridAttributes.tSelectionFactors;
uniformValues.uPerMotifBufferWidth = motifGridAttributes.uPerMotifBufferWidth;
uniformValues.uPerMotifBufferHeight = motifGridAttributes.uPerMotifBufferHeight;
gridOfMotifs.geometry.computeBoundingSphere();
...
function makeCustomUniforms( uniformValues ) {
selectionFactorsTexture = new THREE.DataTexture( uniformValues.tSelectionFactors, uniformValues.uPerMotifBufferWidth, uniformValues.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
var customUniforms = {
tPositions : { type : "t", value : null },
tSelectionFactors : { type : "t", value : selectionFactorsTexture },
uPerMotifBufferWidth : { type : "f", value : uniformValues.uPerMotifBufferWidth },
uPerMotifBufferHeight : { type : "f", value : uniformValues.uPerMotifBufferHeight },
uTextureSheet : { type : "t", value : texture }, // this is a sprite sheet of all 10 strokes
uPointSize : { type : "f", value : 18.0 }, // the radius of a point in WebGL units, e.g. 30.0
// Coords for the hatch textures:
uTextureCoordSizeX : { type : "f", value : 1.0 / numTexturesInSheet },
uTextureCoordSizeY : { type : "f", value : 1.0 }, // the size of a texture in the texture map ( they're square, thus only one value )
};
return customUniforms;
}
And here is the corresponding shader program (vert & frag):
Vertex shader:
uniform sampler2D tPositions;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform sampler2D uTextureSheet;
uniform float uPointSize; // the radius size of the point in WebGL units, e.g. "30.0"
uniform float uTextureCoordSizeX; // horizontal dimension of each texture given the full side = 1
uniform float uTextureCoordSizeY; // vertical dimension of each texture given the full side = 1
attribute float aTextureIndex;
attribute float aAlpha;
attribute float aCellIndex;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
[...skipping noise function for brevity...]
void main() {
vec4 tmpPos = texture2D( tPositions, position.xy );
vec2 pos = tmpPos.xy;
vec2 vel = tmpPos.zw;
vCellIndex = aCellIndex;
if (aCellIndex >= 0.0) { // buffer filler cell indexes are -1
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX, floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float aSelectedMotif = selectionFactors.x;
float aColor = selectionFactors.y;
float fadeFactor = selectionFactors.z;
vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 );
vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY );
vAlpha = aAlpha * fadeFactor;
vDensity = vel.x + vel.y;
vAlpha *= abs( vDensity * 3.0 );
vColor = vec3( 1.0, aColor, 1.0 ); // set RGB color associated to vertex; use later in fragment shader.
gl_PointSize = uPointSize;
} else { // if this is a filler cell index (-1)
vAlpha = 0.0;
vDensity = 0.0;
vColor = vec3(0.0, 0.0, 0.0);
gl_PointSize = 0.0;
}
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 ); // position holds the real z value. The z value of "color" is a component of velocity
}
Fragment shader:
uniform sampler2D tPositions;
uniform sampler2D uTextureSheet;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
void main() {
gl_FragColor = vec4( vColor, vAlpha );
if (vCellIndex >= 0.0) { // only render out the texture if this point is not a buffer filler
vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize );
gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord );
}
}
Expected Behavior: I can achieve this by forcing all the DataTextures to be 1:1
Weird Behavior: When the smaller DataTextures are 2:1, the perfectly horizontal clouds in the top right of the picture below appear and their physics is messed up. When the larger DataTextures are 2:1, the grid is skewed and the clouds appear to be missing parts (as seen below). When both the small and large textures are 2:1, both odd behaviors occur (this is the case in the image below).
Thanks to an answer to my related question here, I now know what was going wrong. The problem was in the way I was using the arrays of indexes (1,2,3,4,5...) to access the DataTextures' texels in the shader.
In this function (and the one for the larger DataTextures)...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
...I assumed that in order to create the y value for my custom uv, perMotifUV, I would need to divide the aCellIndex by the height of the buffer, uPerMotifBufferHeight (its "vertical" dimension). However, as explained in the SO Q&A linked here, the index should of course be divided by the buffer's width, which tells you how many rows down you are!
Thus, the function should be revised to...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferWidth)*texelSizeY ); // note the change to uPerMotifBufferWidth here
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
The reason my program worked on square DataTextures (1:1) is that in such cases the height and width were equal, so my function was effectively dividing by width in the incorrect line because height=width!
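The corrected mapping can be packaged as a small GLSL helper so the same logic serves both the per-motif and per-particle lookups; a sketch using the question's uniform names (the helper itself is new):

// converts a linear index into texel-centered UVs for a width x height DataTexture
vec2 indexToUV( float index, float width, float height ) {
    float texelSizeX = 1.0 / width;
    float texelSizeY = 1.0 / height;
    float col = mod( index, width );
    float row = floor( index / width ); // divide by the WIDTH to find the row
    return vec2( col * texelSizeX, row * texelSizeY ) + 0.5 * vec2( texelSizeX, texelSizeY );
}

// usage:
vec2 perMotifUV = indexToUV( aCellIndex, uPerMotifBufferWidth, uPerMotifBufferHeight );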

Custom Phong Shader for THREE.JS Object

Goal: Calculate normals in the vertex shader for displaced vertices.
Current State: Some hacky code that I don't believe is 100% correct.
--- progress ---
vert is the modified position of the vertex
vertNormal is the modified position of the vertex applied to the normals ( basically a clone )
vec3 objectNormal = normalize(cross(vert-position,vertNormal-position));
vec3 transformedNormal = normalMatrix * objectNormal;
vNormal = normalize( transformedNormal );
http://fallingcode.com/servedFiles/normals.jpg
I just need some feedback about that part of the vertex shader code at this point.
After @WestLangley's help, I've reached my goal. The waves in the image are just there to show the result; I'll have to research equations to make them look more natural.
So, the normals are being calculated correctly and the environment reflection (a THREE.JS cubemap) is working correctly too.
http://www.fallingcode.com/servedFiles/calculatedNormals.jpg
The following code in the vertex shader is what calculates the normals after vertices have been moved along the normal (the z axis in this case).
// the displacement function
float displace( vec3 pos ) {
    float amplitude;
    amplitude = sin( pos.y + time ) * 0.1;
    return amplitude;
}

float df = displace( position );
vec3 displacedPosition = position + normalize( normal ) * df;

float delta = 0.01;
vec3 newNormal = vec3( df - displace( position + vec3( delta, 0, 0 ) ), df - displace( position + vec3( 0, delta, 0 ) ), delta );
newNormal = normalize( newNormal );
vNormal = normalize( normalMatrix * newNormal );
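For completeness, a minimal sketch of supplying the time uniform that the displacement function above depends on (the material setup names are illustrative, not the asker's code):

var material = new THREE.ShaderMaterial( {
    uniforms: { time: { type: 'f', value: 0.0 } },
    vertexShader: document.getElementById( 'vertexShader' ).textContent,
    fragmentShader: document.getElementById( 'fragmentShader' ).textContent
} );

function animate( t ) {
    requestAnimationFrame( animate );
    material.uniforms.time.value = t * 0.001; // seconds drive the sin() inside displace()
    renderer.render( scene, camera );
}
requestAnimationFrame( animate );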
