Custom Phong Shader for THREE.JS Object - three.js

Goal: Calculate normals in the vertex shader for displaced vertices.
Current State: Some hacky code that I don't believe is 100% correct.
--- progress ---
vert is the modified position of the vertex.
vertNormal is the modified position of the vertex applied to the normal (basically a clone).
vec3 objectNormal = normalize(cross(vert-position,vertNormal-position));
vec3 transformedNormal = normalMatrix * objectNormal;
vNormal = normalize( transformedNormal );
http://fallingcode.com/servedFiles/normals.jpg
I just need some feedback about that part of the vertex shader code at this point.

After @WestLangley's help, I've reached my goal. The waves in the image are just to show the result. I'll have to research equations to make them more natural looking.
So, the normals are being calculated correctly and the environment reflection (a THREE.JS cubemap) is working correctly too.
http://www.fallingcode.com/servedFiles/calculatedNormals.jpg
The following code in the vertex shader is what calculates the normals after vertices have been moved along the normal (the z axis in this case).
// the displacement function
float displace( vec3 pos ) {
float amplitude;
amplitude = sin( pos.y + time ) * 0.1;
return amplitude;
}
float df = displace( position );
vec3 displacedPosition = position + normalize( normal ) * df;
float delta = 0.01;
vec3 newNormal = vec3( df - displace( position + vec3( delta, 0, 0 ) ), df - displace( position + vec3( 0, delta, 0 ) ), delta );
newNormal = normalize( newNormal );
vNormal = normalize( normalMatrix * newNormal );
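A more general variant of the last few lines (a sketch only, not part of the answer above): instead of assuming the surface faces the z axis, displace two neighbouring points along the surface tangents and rebuild the normal from the displaced tangent vectors. It reuses displace, position, normal and displacedPosition as defined above.
float delta = 0.01;
// pick any helper vector that is not parallel to the normal to build a tangent frame
vec3 helper = abs( normal.y ) < 0.99 ? vec3( 0.0, 1.0, 0.0 ) : vec3( 1.0, 0.0, 0.0 );
vec3 tangent = normalize( cross( normal, helper ) );
vec3 bitangent = normalize( cross( normal, tangent ) );
// displace two neighbouring points exactly the way the vertex itself was displaced
vec3 neighbour1 = position + tangent * delta;
vec3 neighbour2 = position + bitangent * delta;
vec3 displaced1 = neighbour1 + normalize( normal ) * displace( neighbour1 );
vec3 displaced2 = neighbour2 + normalize( normal ) * displace( neighbour2 );
// the new normal is the cross product of the two displaced tangent vectors
vec3 newNormal = normalize( cross( displaced1 - displacedPosition, displaced2 - displacedPosition ) );
vNormal = normalize( normalMatrix * newNormal );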

Related

How to get direction towards camera from vertex? (in a vertex shader, glsl)

I have this code:
vec4 localPosition = vec4( position, 1.);
vec4 worldPosition = modelMatrix * localPosition;
vec3 look = normalize( vec3(cameraPosition) - vec3(worldPosition) );
vec3 transformed = vec3( position ) + look;
But for some reason, it just moves the vertex 1 unit towards the origin point in the scene (0,0,0).
I need it to move the vertex towards the camera(where you are viewing the scene from).
I can't seem to find clear information anywhere on how to accomplish this.
It turned out to be a three.js issue: I had to set isShaderMaterial = true in order to get the cameraPosition uniform to update. o_o
material.isShaderMaterial = true; //We need to set this so that the cameraPosition uniform is updated in the shader
material.onBeforeCompile = function ( shader ) {
shader.vertexShader = shader.vertexShader.replace(
'#include <begin_vertex>',
[
'float myOffset = 0.0;',
'myOffset = (vColor.r + vColor.g + vColor.b) < 3.0 ? 0.01 : 0.0;',
'vec4 localPosition = vec4( position, 1.);',
'vec4 worldPosition = modelMatrix * localPosition;',
'vec3 look = myOffset * normalize( cameraPosition - vec3(worldPosition) );',
'vec3 transformed = vec3( position ) + look;'
].join( '\n' )
);
material.userData.shader = shader;
};
If you have a view matrix, transform the vertex position into view coordinates; then you can apply the translation along the camera axes.
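A minimal sketch of that hint (the amount uniform is assumed and not part of the question; position, modelViewMatrix and projectionMatrix are the built-ins THREE.ShaderMaterial already provides): in view space the camera sits at the origin, so the direction from a vertex towards the camera is simply the negated view-space position.
uniform float amount; // how far to move towards the camera (assumed uniform)
void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    // in view space the camera is at the origin, so -mvPosition.xyz points towards it
    mvPosition.xyz += amount * normalize( -mvPosition.xyz );
    gl_Position = projectionMatrix * mvPosition;
}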

Implement antialiasing logic for line segments and triangles in GLSL shaders

I'm building a 2D graph structure based on Three.js; all elements of the graph (nodes, edges, triangles for arrows) are calculated in shaders. I was able to reach a good level of antialiasing for nodes (circles) but am stuck with the same task for lines and triangles.
I got good antialiasing results for the nodes (circles), with and without stroke, by following this question: How can I add a uniform width outline to WebGL shader drawn circles/ellipses (drawn using edge/distance antialiasing). My code responsible for the antialiasing alpha:
`float strokeWidth = 0.09;
float outerEdgeCenter = 0.5 - strokeWidth;
float d = distance(vUV, vec2(.5, .5));
float delta = fwidth(d);
float alpha = 1.0 - smoothstep(0.45 - delta, 0.45, d);
float stroke = 1.0 - smoothstep(outerEdgeCenter - delta,
outerEdgeCenter + delta, d);`
But now I'm completely stuck on doing the same for edges and triangles.
Here is an example of the shape images that I have now (on non-retina displays):
To reduce under-sampling artifacts I want to apply similar algorithms (as for the circles) directly in the shaders by manipulating alpha, and I have already found some materials related to this topic:
https://thebookofshaders.com/glossary/?search=smoothstep - seems to be the closest solution, but unfortunately I wasn't able to implement it properly or figure out how to set up the y equation for segmented lines.
https://discourse.threejs.org/t/shader-to-create-an-offset-inward-growing-stroke/6060/12 - the last answer looks promising but does not give me a proper result.
https://www.shadertoy.com/view/4dcfW8 - also does not give a proper result.
Here are examples of my shaders for lines and triangles:
Line VertexShader (a slightly adapted version of WestLangley's LineMaterial shader):
`precision highp float;
#include <common>
#include <color_pars_vertex>
#include <fog_pars_vertex>
#include <logdepthbuf_pars_vertex>
#include <clipping_planes_pars_vertex>
uniform float linewidth;
uniform vec2 resolution;
attribute vec3 instanceStart;
attribute vec3 instanceEnd;
attribute vec3 instanceColorStart;
attribute vec3 instanceColorEnd;
attribute float alphaStart;
attribute float alphaEnd;
attribute float widthStart;
attribute float widthEnd;
varying vec2 vUv;
varying float alphaTest;
void trimSegment( const in vec4 start, inout vec4 end ) {
// trim end segment so it terminates between the camera plane and the near plane
// conservative estimate of the near plane
float a = projectionMatrix[ 2 ][ 2 ]; // 3rd entry in the 3rd column
float b = projectionMatrix[ 3 ][ 2 ]; // 3rd entry in the 4th column
float nearEstimate = - 0.5 * b / a;
float alpha = ( nearEstimate - start.z ) / ( end.z - start.z );
end.xyz = mix( start.xyz, end.xyz, alpha );
}
void main() {
#ifdef USE_COLOR
vColor.xyz = ( position.y < 0.5 ) ? instanceColorStart : instanceColorEnd;
alphaTest = ( position.y < 0.5 ) ? alphaStart : alphaEnd;
#endif
float aspect = resolution.x / resolution.y;
vUv = uv;
// camera space
vec4 start = modelViewMatrix * vec4( instanceStart, 1.0 );
vec4 end = modelViewMatrix * vec4( instanceEnd, 1.0 );
// special case for perspective projection, and segments that terminate either in, or behind, the camera plane
// clearly the gpu firmware has a way of addressing this issue when projecting into ndc space
// but we need to perform ndc-space calculations in the shader, so we must address this issue directly
// perhaps there is a more elegant solution -- WestLangley
bool perspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 ); // 4th entry in the 3rd column
if (perspective) {
if (start.z < 0.0 && end.z >= 0.0) {
trimSegment( start, end );
} else if (end.z < 0.0 && start.z >= 0.0) {
trimSegment( end, start );
}
}
// clip space
vec4 clipStart = projectionMatrix * start;
vec4 clipEnd = projectionMatrix * end;
// ndc space
vec2 ndcStart = clipStart.xy / clipStart.w;
vec2 ndcEnd = clipEnd.xy / clipEnd.w;
// direction
vec2 dir = ndcEnd - ndcStart;
// account for clip-space aspect ratio
dir.x *= aspect;
dir = normalize( dir );
// perpendicular to dir
vec2 offset = vec2( dir.y, - dir.x );
// undo aspect ratio adjustment
dir.x /= aspect;
offset.x /= aspect;
// sign flip
if ( position.x < 0.0 ) offset *= - 1.0;
// endcaps, to round line corners
if ( position.y < 0.0 ) {
// offset += - dir;
} else if ( position.y > 1.0 ) {
// offset += dir;
}
// adjust for linewidth
offset *= (linewidth * widthStart);
// adjust for clip-space to screen-space conversion // maybe resolution should be based on viewport ...
offset /= resolution.y;
// select end
vec4 clip = ( position.y < 0.5 ) ? clipStart : clipEnd;
// back to clip space
offset *= clip.w;
clip.xy += offset;
gl_Position = clip;
vec4 mvPosition = ( position.y < 0.5 ) ? start : end; // this is an approximation
#include <logdepthbuf_vertex>
#include <clipping_planes_vertex>
#include <fog_vertex>
}`
Line FragmentShader:
`precision highp float;
#include <common>
#include <color_pars_fragment>
#include <fog_pars_fragment>
#include <logdepthbuf_pars_fragment>
#include <clipping_planes_pars_fragment>
uniform vec3 diffuse;
uniform float opacity;
varying vec2 vUv;
varying float alphaTest;
void main() {
if ( abs( vUv.y ) > 1.0 ) {
float a = vUv.x;
float b = ( vUv.y > 0.0 ) ? vUv.y - 1.0 : vUv.y + 1.0;
float len2 = a * a + b * b;
if ( len2 > 1.0 ) discard;
}
vec4 diffuseColor = vec4( diffuse, alphaTest );
#include <logdepthbuf_fragment>
#include <color_fragment>
gl_FragColor = vec4( diffuseColor.rgb, diffuseColor.a );
#include <premultiplied_alpha_fragment>
#include <tonemapping_fragment>
#include <encodings_fragment>
#include <fog_fragment>
}`
Triangle vertex shader:
`precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform float zoomLevel;
attribute vec3 position;
attribute vec3 vertexPos;
attribute vec3 color;
attribute float alpha;
attribute float xAngle;
attribute float yAngle;
attribute float xScale;
attribute float yScale;
varying vec4 vColor;
// transforms the 'positions' geometry with instance attributes
vec3 transform( inout vec3 position, vec3 T) {
position.x *= xScale;
position.y *= yScale;
// Rotate the position
vec3 rotatedPosition = vec3(
position.x * yAngle + position.y * xAngle,
position.y * yAngle - position.x * xAngle, 0);
position = rotatedPosition + T;
// return the transformed position
return position;
}
void main() {
vec3 pos = position;
vColor = vec4(color, alpha);
// transform it
transform(pos, vertexPos);
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 );
}`
Triangle FragmentShader:
`precision highp float;
varying vec4 vColor;
void main() {
gl_FragColor = vColor;
}`
I will really appreciate any help on how to do this, or a suggestion of the right direction for further investigation. Thank you!
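One direction worth exploring (a sketch only, not from the original post): reuse the circle's fwidth/smoothstep pattern on a distance-to-segment function in the line fragment shader. Here vPoint, vStart, vEnd and lineWidthPx are assumed varyings/uniforms carrying the fragment position, the segment endpoints and the line width, all in the same (e.g. screen-pixel) space; diffuse and opacity are the uniforms already present in the shader above.
float distToSegment( vec2 p, vec2 a, vec2 b ) {
    vec2 pa = p - a;
    vec2 ba = b - a;
    float h = clamp( dot( pa, ba ) / dot( ba, ba ), 0.0, 1.0 );
    return length( pa - ba * h );
}
void main() {
    float d = distToSegment( vPoint, vStart, vEnd );
    float halfWidth = 0.5 * lineWidthPx;
    float aa = fwidth( d ); // pixel-sized transition band, as in the circle shader
    float alpha = 1.0 - smoothstep( halfWidth - aa, halfWidth + aa, d );
    if ( alpha <= 0.0 ) discard;
    gl_FragColor = vec4( diffuse, alpha * opacity );
}
For the triangles the same pattern can be applied with the distance replaced by the minimum distance to the three edges (or a signed distance that is negative inside the triangle), fading alpha over one fwidth of that distance.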

glClipPlane - Is there an equivalent in webGL?

I have a 3D mesh. Is there any way to render a sectional view (clipping), like glClipPlane in OpenGL?
I am using Three.js r65.
The latest shader that I have added is:
Fragment Shader:
uniform float time;
uniform vec2 resolution;
varying vec2 vUv;
void main( void )
{
vec2 position = -1.0 + 2.0 * vUv;
float red = abs( sin( position.x * position.y + time / 2.0 ) );
float green = abs( cos( position.x * position.y + time / 3.0 ) );
float blue = abs( cos( position.x * position.y + time / 4.0 ) );
if(position.x > 0.2 && position.y > 0.2 )
{
discard;
}
gl_FragColor = vec4( red, green, blue, 1.0 );
}
Vertex Shader:
varying vec2 vUv;
void main()
{
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
Unfortunately, the OpenGL ES specification against which WebGL is specified has no clip planes, and the vertex shader stage lacks the gl_ClipDistance output by which plane clipping is implemented in modern OpenGL.
However you can use the fragment shader to implement per-fragment clipping. In the fragment shader test the position of the incoming fragment against your set of clip planes and if the fragment does not pass the test discard it.
Update
Let's have a look at how clip planes are defined in fixed function pipeline OpenGL:
void ClipPlane( enum p, double eqn[4] );
The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn is an array of four double-precision floating-point values. These are the coefficients of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The inverse of the current model-view matrix is applied to these coefficients, at the time they are specified, yielding
p' = (p'1, p'2, p'3, p'4) = (p1, p2, p3, p4) inv(M)
(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly-conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates transpose( (x_e, y_e, z_e, w_e) ) that satisfy
(p'1, p'2, p'3, p'4) · transpose( (x_e, y_e, z_e, w_e) ) ≥ 0
lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.
So what you do is, you add uniforms by which you pass the clip plane parameters p' and add another out/in pair of variables between the vertex and fragment shader to pass the vertex eye space position. Then in the fragment shader the first thing you do is performing the clip plane equation test and if it doesn't pass you discard the fragment.
In the vertex shader
in vec3 vertex_position;
out vec4 eyespace_pos;
uniform mat4 modelview;
void main()
{
/* ... */
eyespace_pos = modelview * vec4(vertex_position, 1);
/* ... */
}
In the fragment shader
in vec4 eyespace_pos;
uniform vec4 clipplane;
void main()
{
if( dot( eyespace_pos, clipplane) < 0 ) {
discard;
}
/* ... */
}
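Since the answer above talks about testing against a set of clip planes, the same fragment shader test extends naturally to several planes with a uniform array and a loop. The plane count and the clipplanes name below are arbitrary for the sketch:
in vec4 eyespace_pos;
const int NUM_CLIPPLANES = 2;               // arbitrary for this sketch
uniform vec4 clipplanes[ NUM_CLIPPLANES ];  // each plane in eye coordinates, like `clipplane` above
void main()
{
    // discard the fragment as soon as it fails any one of the plane tests
    for ( int i = 0; i < NUM_CLIPPLANES; ++i ) {
        if ( dot( eyespace_pos, clipplanes[ i ] ) < 0.0 ) {
            discard;
        }
    }
    /* ... */
}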
In the newer versions (> r.76) of three.js clipping is supported in the THREE.WebGLRenderer. There is an array property called clippingPlanes where you can add your custom clipping planes (THREE.Plane instances).
For three.js you can check these two examples:
1) WebGL clipping (code base here on GitHub)
2) WebGL clipping advanced (code base here on GitHub)
A simple example
To add a clipping plane to the renderer you can do:
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var plane = new THREE.Plane( normal, constant );
renderer.clippingPlanes = [plane];
Here a fiddle to demonstrate this.
You can also clip on object level by adding a clipping plane to the object material. For this to work you have to set the renderer localClippingEnabled property to true.
// set renderer
renderer.localClippingEnabled = true;
// add clipping plane to material
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var color = 0xff0000;
var plane = new THREE.Plane( normal, constant );
var material = new THREE.MeshBasicMaterial({ color: color });
material.clippingPlanes = [plane];
var mesh = new THREE.Mesh( geometry, material );
Note: In r.77 some of the clipping functionality in the THREE.WebGLRenderer was moved to a separate THREE.WebGLClipping class; check here for reference in the three.js master branch.

Interior Mapping shader self shadowing

I'm tinkering with Joost van Dongen's Interior mapping shader and I'm trying to implement self-shadowing. Still, I couldn't quite figure out what coordinate space the shadow-casting light vectors need to be in. You can see a somewhat working demo here. I've attached the light position with an offset to the camera position just to see what's happening, but obviously it doesn't look right either.
The shader code is below. Look for SHADOWS DEV in the fragment shader. The vectors in question are shad_E and shad_I.
vertex shader:
varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space
varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction
uniform vec3 lightPosition;
void main() {
// inverse view matrix
mat4 modelViewMatrixInverse = InverseMatrix( modelViewMatrix );
// surface position in object space
oP = position;
// position of the eye in object space
oE = modelViewMatrixInverse[3].xyz;
// incident ray direction in object space
oI = oP - oE;
// link the light position to camera for testing
// need to find a way for world space directional light to work
shad_E = oE - lightPosition;
// light vector
shad_I = oP - shad_E;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
fragment shader:
varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space
varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction
uniform vec3 wallFreq;
uniform float wallsBias;
uniform vec3 wallCeilingColor;
uniform vec3 wallFloorColor;
uniform vec3 wallXYColor;
uniform vec3 wallZYColor;
float checker(vec2 uv, float checkSize) {
float fmodResult = mod( floor(checkSize * uv.x) + floor(checkSize * uv.y), 2.0);
if (fmodResult < 1.0) {
return 1.0;
} else {
return 0.85;
}
}
void main() {
// INTERIOR MAPPING by Joost van Dongen
// http://interiormapping.oogst3d.net/
// email: joost@ronimo-games.com
// Twitter: @JoostDevBlog
vec3 wallFrequencies = wallFreq / 2.0 - wallsBias;
//calculate wall locations
vec3 walls = ( floor( oP * wallFrequencies) + step( vec3( 0.0 ), oI )) / wallFrequencies;
//how much of the ray is needed to get from the oE to each of the walls
vec3 rayFractions = ( walls - oE) / oI;
//texture-coordinates of intersections
vec2 intersectionXY = (oE + rayFractions.z * oI).xy;
vec2 intersectionXZ = (oE + rayFractions.y * oI).xz;
vec2 intersectionZY = (oE + rayFractions.x * oI).zy;
//use the intersection as the texture coordinates for the ceiling
vec3 ceilingColour = wallCeilingColor * checker( intersectionXZ, 2.0 );
vec3 floorColour = wallFloorColor * checker( intersectionXZ, 2.0 );
vec3 verticalColour = mix(floorColour, ceilingColour, step(0.0, oI.y));
vec3 wallXYColour = wallXYColor * checker( intersectionXY, 2.0 );
vec3 wallZYColour = wallZYColor * checker( intersectionZY, 2.0 );
// SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //
vec3 shad_P = oP; // just surface position in object space
vec3 shad_walls = ( floor( shad_P * wallFrequencies) + step( vec3( 0.0 ), shad_I )) / wallFrequencies;
vec3 shad_rayFr = ( shad_walls - shad_E ) / shad_I;
// Cast shadow from ceiling planes (intersectionXZ)
wallZYColour *= mix( 0.3, 1.0, step( shad_rayFr.x, shad_rayFr.y ));
verticalColour *= mix( 0.3, 1.0, step( rayFractions.y, shad_rayFr.y ));
wallXYColour *= mix( 0.3, 1.0, step( shad_rayFr.z, shad_rayFr.y ));
// SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //
// intersect walls
float xVSz = step(rayFractions.x, rayFractions.z);
vec3 interiorColour = mix(wallXYColour, wallZYColour, xVSz);
float rayFraction_xVSz = mix(rayFractions.z, rayFractions.x, xVSz);
float xzVSy = step(rayFraction_xVSz, rayFractions.y);
interiorColour = mix(verticalColour, interiorColour, xzVSy);
gl_FragColor.xyz = interiorColour;
}
Based on my very limited understanding of what you're trying to implement, it seems you would need to take the location of the intersection between the eye vector and the interior plane it hits, then trace it back to the light.
To trace back to the light, you would first have to check if the interior plane intersected by the eye vector is back-facing from the light's perspective, which would make it in shadow. If it's front-facing then you have to ray cast from within the room to the light and check for an intersection with any of the other interior planes.
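A rough sketch of that idea, written against the variable names from the question's fragment shader. It would go at the end of main(), after interiorColour has been resolved, in place of the SHADOWS DEV block, and it assumes an object-space light position uniform (here called lightPosObj) that is not in the original code:
// reconstruct the point the eye ray actually hit, in object space
float hitFraction = mix( rayFractions.y, rayFraction_xVSz, xzVSy );
vec3 hitPoint = oE + hitFraction * oI;
// shadow ray from the hit point towards the light (object space)
vec3 toLight = lightPosObj - hitPoint;
// walls of the same room cell as seen from the hit point, in the light's direction
vec3 shadowWalls = ( floor( hitPoint * wallFrequencies ) + step( vec3( 0.0 ), toLight ) ) / wallFrequencies;
vec3 shadowFractions = ( shadowWalls - hitPoint ) / toLight;
float nearestWall = min( min( shadowFractions.x, shadowFractions.y ), shadowFractions.z );
// if a wall is reached before the light (fraction < 1.0) the hit point is in shadow;
// a fraction near 0.0 corresponds to the back-facing case described above
float lit = step( 1.0, nearestWall );
gl_FragColor.xyz = interiorColour * mix( 0.3, 1.0, lit ); // instead of the unshadowed assignment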

moving from one point to point on sphere

I'm working with a GPU based particle system.
There are 1 million particles computed by passing in the x,y,z positions as rgb values on a 1024*1024 texture. The same is being done for their velocities.
I'm trying to make them move from an arbitrary point to a point on sphere.
My current shader, which I'm using for the computation, is moving from one point to another directly.
I'm not using the mass or the velocity texture at the moment:
// float mass = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
// vec3 v = texture2D( velArray, texCoord.st).rgb;
// map into 'cinder space'
p = (p * - 1.0) + 0.5;
// vec3 acc = -0.0002*p; // Centripetal force
// vec3 ayAcc = 0.00001*normalize(cross(vec3(0, 1 ,0),p)); // Angular force
// vec3 new_v = v + mass*(acc+ayAcc);
vec3 new_p = p + ((moveToPos - p) / duration);
// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
moveToPos is the mouse pointer as a float (0.0 to 1.0).
The coordinate system is being translated from (0.5, 0.5 > -0.5, -0.5) to (0.0, 0.0 > 1.0, 1.0).
I'm completely new to vector maths, and the calculations are confusing me. I know I need to use the formulas:
x = R sin(ϕ) cos(θ)
y = R sin(ϕ) sin(θ)
z = R cos(ϕ)
but calculating the angles from moveToPos (x, y, z) to p (x, y, z) remains a problem.
I wrote the original version of this GPU-particles shader a few years back (now at https://github.com/num3ric/Cinder-Particles). Here is one possible approach to your problem.
I would start with a fragment shader applying a spring force to the particles so that they more or less are constrained to the surface of a sphere. Something like this:
uniform sampler2D posArray;
uniform sampler2D velArray;
varying vec4 texCoord;
void main(void)
{
float mass = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
vec3 v = texture2D( velArray, texCoord.st).rgb;
float x0 = 0.5; //distance from center of sphere to be maintained
float x = distance(p, vec3(0,0,0)); // current distance
vec3 acc = -0.0002*(x - x0)*p; //apply spring force (hooke's law)
vec3 new_v = v + mass*(acc);
new_v = 0.999*new_v; // friction to slow down velocities over time
vec3 new_p = p + new_v;
//Render to positions texture
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//Render to velocities texture
gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
}
Then, I would pass a new vec3 uniform for the mouse position intersecting a sphere of the same radius (done outside the shader in Cinder).
Now, combine this with the previous soft spring constraint: you could add a tangential force towards this attraction point. Start with a simple (mousePos - p) acceleration, and then figure out a way to make the force exclusively tangential using cross products.
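A sketch of that last step (the attractor uniform is assumed: the mouse position already projected onto the sphere, in the same space as p; everything else comes from the shader above):
vec3 radial = normalize( p );                             // surface normal at the particle (sphere centred at the origin)
vec3 pull = attractor - p;                                // raw attraction towards the mouse point
vec3 tangential = cross( radial, cross( pull, radial ) ); // strip the radial part, keep the tangential part
vec3 new_v = v + mass * ( acc + 0.001 * tangential );     // 0.001 is just a tuning gain, combined with the spring force
Because the tangential term never pushes the particle off the sphere, the spring force from the previous snippet keeps the radius stable while the particles drift towards the attraction point.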
I'm not sure how the spherical coordinates approach would work here.
x = R sin(ϕ) cos(θ)
y = R sin(ϕ) sin(θ)
z = R cos(ϕ)
Where do you get ϕ and θ? The textures store the positions and velocities in cartesian coordinates. Plus, converting back and forth is not really an option.
My explanation could be too advanced if you are not comfortable with vectors. Unfortunately, shaders and particle animation are very mathematical by nature.
Here is a solution that I've worked out. It works; however, if I move the center point of the spheres outside their own bounds, I lose particles.
#define NPEOPLE 5
uniform sampler2D posArray;
uniform sampler2D velArray;
uniform vec3 centerPoint[NPEOPLE];
uniform float radius[NPEOPLE];
uniform float duration;
varying vec4 texCoord;
void main(void) {
float personToGet = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
float mass = texture2D( velArray, texCoord.st).a;
vec3 v = texture2D( velArray, texCoord.st).rgb;
// map into 'cinder space'
p = (p * - 1.0) + 0.5;
vec3 vec_p = p - centerPoint[int(personToGet)];
float len_vec_p = sqrt( ( vec_p.x * vec_p.x ) + (vec_p.y * vec_p.y) + (vec_p.z * vec_p.z) );
vec_p = ( ( radius[int(personToGet)] /* mass */ ) / len_vec_p ) * vec_p;
vec3 new_p = ( vec_p + centerPoint[int(personToGet)] );
new_p = p + ( (new_p - p) / (duration) );
// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;
vec3 new_v = v;
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, personToGet);
gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, mass);
}
I'm passing in arrays of 5 vec3s and 5 floats, mapped as the 5 center points and radii.
The particles are set up with a random position at the beginning and move towards the center point indexed by the alpha value of the position texture.
My aim is to pass in blob data from openCV and map the spheres to people on a camera feed.
It's really uninteresting visually at the moment, so I will need to use the velocity texture to add to the behaviour of the particles.
