GLSL: How to update the position of each vertex with random values in vertex-shader? - three.js

I am new to vertex shaders. I am using three.js morphTargets and Points material to render my object's mesh, and a vertex shader to render and animate it.
At every vertex I have placed a sphere image (think of molecules), and I want them to vibrate randomly in the x and y directions. I am trying to add some random values so that they vibrate at the same rate but in random directions.
void main() {
//Morph the position based on morphTargets
vec3 morphed = vec3( 0.0 , 0.0 , 0.0 );
morphed += ( morphTarget0 - position ) * morphTargetInfluences[0];
morphed += position;
// vibrate the molecules based on temperature
float degrees = temperature + 60.0;
float amplitude = degrees + 100.0 / degrees;
float rand1 = (random * rand(position.xy) * amplitude) * 0.00001;
morphed.x = morphed.x + rand1;
morphed.y = morphed.y + rand1;
//morphed.z = morphed.z + rand1;
gl_Position = projectionMatrix * modelViewMatrix * vec4( morphed, 1.0 );
}
The above code vibrates all the molecules in the same direction; it looks like the entire molecule container is moving, not the individual molecules.
So how can I get random vibrations for each vertex?

Use a noise function seeded per vertex: that gives every vertex its own motion while guaranteeing that the motion is continuous over time.
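For example, a minimal sketch of this idea: seed a cheap hash with each vertex's position to get a per-vertex phase, and drive the offset with a time uniform so the motion stays continuous (the time uniform and the amplitude value are assumptions to be wired up in your own setup; the hash is the common fract(sin(dot(...))) one-liner):
uniform float time; // assumed uniform, incremented every frame from JavaScript
// cheap per-vertex hash returning a pseudo-random value in [0, 1)
float rand( vec2 seed ) {
return fract( sin( dot( seed, vec2( 12.9898, 78.233 ) ) ) * 43758.5453 );
}
void main() {
vec3 morphed = position + ( morphTarget0 - position ) * morphTargetInfluences[0];
// each vertex gets its own phase, so the vertices no longer move in lockstep
float phaseX = rand( position.xy ) * 6.2831853; // 2 * pi
float phaseY = rand( position.yx ) * 6.2831853; // different seed for a different axis
float amplitude = 0.01; // assumed value, tune to the model's scale
morphed.x += sin( time * 10.0 + phaseX ) * amplitude;
morphed.y += sin( time * 10.0 + phaseY ) * amplitude;
gl_Position = projectionMatrix * modelViewMatrix * vec4( morphed, 1.0 );
}
A real 2D/3D noise function (simplex noise, for instance) sampled at something like the vertex position plus time would give smoother, less repetitive motion, but the structure of the shader stays the same.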

Related

How can I move only specific vertices from my vertex shader? (And how to choose them)

I created a square like this:
THREE.PlaneBufferGeometry(1, 1, 1, 50);
Regarding its material I used a shader material.
THREE.ShaderMaterial()
In my vertexShader function I call a 2D noise function that moves each vertex of my square like this:
But in the end I only want the left side of my square to move. I think that if I only move the first 50 vertices, or one vertex out of every two, this should work.
Here's the code of my vertexShader:
void main() {
vUv = uv;
vec3 pos = position.xyz;
pos.x += noiseFunction(vec2(pos.y, time));
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
Does anyone know how I can select only the left-side vertices of my square? Thanks
The position attribute holds the vertex position in local space, which means that the center of the quad is at (0, 0).
Therefore, if you want to apply the displacement only to the vertices on the left side, you need to check whether the x coordinate of the vertex is negative.
void main() {
vUv = uv;
vec3 pos = position.xyz;
if ( pos.x < 0.0 ) {
pos.x += noiseFunction(vec2(pos.y, time));
}
// to avoid conditional branching, remove the entire if-block
// and replace it with the line below
// pos.x += noiseFunction(vec2(pos.y, time)) * max(sign(-pos.x), 0.0);
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
I've used an if-statement to make the idea clear, but in practice you should avoid it.
That way you prevent conditional branching on the GPU.
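As an alternative to the max(sign(...)) trick in the comment above, the built-in step() function expresses the same mask; a sketch of the same idea:
void main() {
vUv = uv;
vec3 pos = position.xyz;
// step(pos.x, 0.0) is 1.0 when pos.x <= 0.0 and 0.0 otherwise,
// so the noise only displaces the left half of the quad
pos.x += noiseFunction(vec2(pos.y, time)) * step(pos.x, 0.0);
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
Note that, unlike the strict pos.x < 0.0 comparison, this also displaces vertices lying exactly at x == 0.0.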

How to build my funny timeline?

While building my responsive website, I would like to create my funny timeline, but I cannot come up with a solution.
It would be a sprite such as a rocket or flying saucer taking off from the bottom middle of the page and leaving smoke behind.
The smoke would linger more or less and reveal my timeline.
(Sketch image)
Does anyone have an idea how to make that possible?
To simulate smoke, you have to use a particle system.
As you may know, WebGL can draw triangles, lines and points.
The last one is what we need here. The smoke is made of hundreds of semi-transparent white disks of slightly different sizes. Each point is defined by 7 attributes:
x, y: starting position.
vx, vy: direction.
radius: maximal radius.
life: number of milliseconds before it disappears.
delay: number of milliseconds to wait before its birth.
One trick is to create the points along a vertical, centered axis: the higher up a point starts, the larger its delay. The other trick is to make the point more and more transparent as it reaches the end of its life.
Here is how you create such vertices:
function createVertices() {
// rnd( max ) / rnd( min, max ) is a small helper (defined in the linked fiddle) returning a random number in the given range.
var x, y, vx, vy, radius, life, delay;
var vertices = [];
for( delay=0; delay<1; delay+=0.01 ) {
for( var loops=0; loops<5; loops++ ) {
// Going left.
x = rnd(0.01);
y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
vx = -rnd(0, 1.5) * 0.0001;
vy = -rnd(0.001);
radius = rnd(0.1, 0.25) / 1000;
life = rnd(2000, 5000);
vertices.push( x, y, vx, vy, radius, life, delay );
// Going right.
x = -rnd(0.01);
y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
vx = rnd(0, 1.5) * 0.0001;
vy = -rnd(0.001);
radius = rnd(0.1, 0.25) / 1000;
life = rnd(2000, 5000);
vertices.push( x, y, vx, vy, radius, life, delay );
}
}
var buff = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, buff );
gl.bufferData( gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW );
return Math.floor( vertices.length / 7 );
}
As you can see, I created points going right and points going left to get a growing, fuzzy triangle.
Then you need a vertex shader controlling the position and size of the points.
WebGL provides the output variable gl_PointSize, which is the size (in pixels) of the square drawn for the current point.
uniform float uniWidth;
uniform float uniHeight;
uniform float uniTime;
attribute vec2 attCoords;
attribute vec2 attDirection;
attribute float attRadius;
attribute float attLife;
attribute float attDelay;
varying float varAlpha;
const float PERIOD = 10000.0;
const float TRAVEL_TIME = 2000.0;
void main() {
float time = mod( uniTime, PERIOD );
time -= TRAVEL_TIME * attDelay;
if( time < 0.0 || time > attLife) return;
vec2 pos = attCoords + time * attDirection;
gl_Position = vec4( pos.xy, 0, 1 );
gl_PointSize = time * attRadius * min(uniWidth, uniHeight);
varAlpha = 1.0 - (time / attLife);
}
Finally, the fragment shader displays each point in white, but the farther a fragment is from the center, the more transparent it becomes.
To know where you are within the square drawn for the current point, you can read the built-in variable gl_PointCoord.
precision mediump float;
varying float varAlpha;
void main() {
float x = gl_PointCoord.x - 0.5;
float y = gl_PointCoord.y - 0.5;
float radius = x * x + y * y; // squared distance from the point center
if( radius > 0.25 ) discard; // 0.25 = 0.5 * 0.5: discard fragments outside the inscribed disk
float alpha = varAlpha * 0.8 * (0.25 - radius);
gl_FragColor = vec4(1, 1, 1, alpha);
}
Here is a live example : https://jsfiddle.net/m1a9qry6/1/

glClipPlane - Is there an equivalent in webGL?

I have a 3D mesh. Is there any way to render a sectional view (clipping), like glClipPlane does in OpenGL?
I am using Three.js r65.
The latest shader that I have added is:
Fragment Shader:
uniform float time;
uniform vec2 resolution;
varying vec2 vUv;
void main( void )
{
vec2 position = -1.0 + 2.0 * vUv;
float red = abs( sin( position.x * position.y + time / 2.0 ) );
float green = abs( cos( position.x * position.y + time / 3.0 ) );
float blue = abs( cos( position.x * position.y + time / 4.0 ) );
if(position.x > 0.2 && position.y > 0.2 )
{
discard;
}
gl_FragColor = vec4( red, green, blue, 1.0 );
}
Vertex Shader:
varying vec2 vUv;
void main()
{
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
Unfortunately, the OpenGL ES specification against which WebGL is specified has no clip planes, and the vertex shader stage lacks the gl_ClipDistance output by which plane clipping is implemented in modern OpenGL.
However, you can use the fragment shader to implement per-fragment clipping: test the position of the incoming fragment against your set of clip planes and, if the fragment does not pass the test, discard it.
Update
Let's have a look at how clip planes are defined in fixed function pipeline OpenGL:
void ClipPlane( enum p, double eqn[4] );
The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn is an array of four double-precision floating-point values. These are the coefficients of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The inverse of the current model-view matrix is applied to these coefficients, at the time they are specified, yielding
p' = (p'1, p'2, p'3, p'4) = (p1, p2, p3, p4) inv(M)
(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates (x_e, y_e, z_e, w_e)^T that satisfy
(p'1, p'2, p'3, p'4) · (x_e, y_e, z_e, w_e)^T ≥ 0
lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.
So what you do is add a uniform by which you pass the clip plane parameters p', and another out/in pair of variables between the vertex and fragment shader to pass the vertex eye-space position. Then, in the fragment shader, the first thing you do is perform the clip plane equation test and, if it doesn't pass, discard the fragment. (In WebGL 1 / GLSL ES 1.00 the in/out qualifiers below would be attribute and varying, respectively.)
In the vertex shader
in vec3 vertex_position;
out vec4 eyespace_pos;
uniform mat4 modelview;
void main()
{
/* ... */
eyespace_pos = modelview * vec4(vertex_position, 1);
/* ... */
}
In the fragment shader
in vec4 eyespace_pos;
uniform vec4 clipplane;
void main()
{
if( dot( eyespace_pos, clipplane ) < 0.0 ) {
discard;
}
/* ... */
}
In the newer versions (> r.76) of three.js clipping is supported in the THREE.WebGLRenderer. There is an array property called clippingPlanes where you can add your custom clipping planes (THREE.Plane instances).
For three.js you can check these two examples:
1) WebGL clipping (code base here on GitHub)
2) WebGL clipping advanced (code base here on GitHub)
A simple example
To add a clipping plane to the renderer you can do:
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var plane = new THREE.Plane( normal, constant );
renderer.clippingPlanes = [plane];
Here a fiddle to demonstrate this.
You can also clip at the object level by adding a clipping plane to the object's material. For this to work you have to set the renderer's localClippingEnabled property to true.
// set renderer
renderer.localClippingEnabled = true;
// add clipping plane to material
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var color = 0xff0000;
var plane = new THREE.Plane( normal, constant );
var material = new THREE.MeshBasicMaterial({ color: color });
material.clippingPlanes = [plane];
var mesh = new THREE.Mesh( geometry, material );
Note: In r.77 some of the clipping functionality in the THREE.WebGLRenderer was moved to a separate THREE.WebGLClipping class, check here for reference in the three.js master branch.

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture: you unproject the x and y and multiply them by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
Currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly):
Now for my new method. I pass in the normal texture, which stores the normal x, y and z in the RGB channels and the depth in the w channel. In the SSAO shader I need to get the position, and this is how I'm doing it:
//viewport is a vec2 of the viewport width and height
//invProj is a mat4 using camera.projectionMatrixInverse (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
vec2 frag_coord = gl_FragCoord.xy/viewport;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However, when I render out the positions with the new method, it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the normalized eye ray by camera.far - camera.near (I was using the normalized depth value, but you need the world-space depth value).
I created a function to extract the position from the normal/depth texture like so:
First in the depth capture pass (fragment shader)
float ld = length(vPosition) / linearDepth; // linearDepth is cam.far - cam.near; vPosition is assumed to be the view-space position varying
gl_FragColor = vec4( normalize( vNormal ).xyz, ld ); // vNormal: the view-space normal varying
And now in the shader trying to extract the position...
// Gets the 3D world position from the normal texture containing depth in its w component.
vec3 get_world_pos( vec2 uv )
{
vec2 frag_coord = uv;
float depth = texture2D(tNormals, frag_coord).w;
float unprojDepth = depth * linearDepth - 1.0;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
vec3 eye_ray = normalize((invProj * device_normal).xyz);
vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
return pos;
}
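With that in place, the position lookup in the SSAO shader simply becomes a call to this function (a minimal usage sketch, reusing the vUv varying from the question):
// previously: vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
vec3 srcPosition = get_world_pos( vUv );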

moving from one point to point on sphere

I'm working with a GPU-based particle system.
There are 1 million particles, computed by passing in the x, y, z positions as RGB values on a 1024 * 1024 texture. The same is done for their velocities.
I'm trying to make them move from an arbitrary point to a point on sphere.
My current shader, which I'm using for the computation, is moving from one point to another directly.
I'm not using the mass or velocity texture at the moment
// float mass = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
// vec3 v = texture2D( velArray, texCoord.st).rgb;
// map into 'cinder space'
p = (p * - 1.0) + 0.5;
// vec3 acc = -0.0002*p; // Centripetal force
// vec3 ayAcc = 0.00001*normalize(cross(vec3(0, 1 ,0),p)); // Angular force
// vec3 new_v = v + mass*(acc+ayAcc);
vec3 new_p = p + ((moveToPos - p) / duration);
// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
moveToPos is the mouse position as floats in the range 0.0 to 1.0.
The coordinate system is being translated from (0.5, 0.5 → -0.5, -0.5) to (0.0, 0.0 → 1.0, 1.0).
I'm completely new to vector maths, and the calculations are confusing me. I know I need to use the formulas:
x = R sin ϕ cos θ
y = R sin ϕ sin θ
z = R cos ϕ
but calculating the angles from moveToPos(xyz) to p(xyz) remains a problem.
I wrote the original version of this GPU-particles shader a few years back (now at: https://github.com/num3ric/Cinder-Particles). Here is one possible approach to your problem.
I would start with a fragment shader applying a spring force to the particles so that they are more or less constrained to the surface of a sphere. Something like this:
uniform sampler2D posArray;
uniform sampler2D velArray;
varying vec4 texCoord;
void main(void)
{
float mass = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
vec3 v = texture2D( velArray, texCoord.st).rgb;
float x0 = 0.5; //distance from center of sphere to be maintaned
float x = distance(p, vec3(0,0,0)); // current distance
vec3 acc = -0.0002*(x - x0)*p; //apply spring force (hooke's law)
vec3 new_v = v + mass*(acc);
new_v = 0.999*new_v; // friction to slow down velocities over time
vec3 new_p = p + new_v;
//Render to positions texture
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//Render to velocities texture
gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
}
Then, I would pass a new vec3 uniform for the mouse position intersecting a sphere of the same radius (computed outside the shader, in Cinder).
Now, combine this with the previous soft spring constraint: you could add a tangential force towards this attraction point. Start with a simple (mousePos - p) acceleration, and then figure out a way to make this force exclusively tangential using cross products, as sketched below.
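A minimal sketch of the cross-product step (assuming a mousePos uniform holding the mouse/sphere intersection point; the 0.0001 gain is arbitrary):
uniform vec3 mousePos; // assumed: mouse ray / sphere intersection, computed on the CPU
// inside main(), after new_v has been computed as above:
vec3 n = normalize(p); // outward normal of the sphere (centered at the origin)
vec3 toMouse = mousePos - p; // raw attraction towards the mouse point
// cross(n, toMouse) is perpendicular to both; crossing that with n again
// projects toMouse onto the tangent plane at p, removing the radial part
vec3 tangential = cross(cross(n, toMouse), n);
new_v += 0.0001 * tangential; // tangential pull towards the attraction point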
I'm not sure how the spherical coordinates approach would work here.
x = R sin ϕ cos θ
y = R sin ϕ sin θ
z = R cos ϕ
Where do you get ϕ and θ? The textures store the positions and velocities in Cartesian coordinates. Plus, converting back and forth is not really an option.
My explanation could be too advanced if you are not comfortable with vectors. Unfortunately, shaders and particle animation are very mathematical by nature.
Here is a solution that I've worked out. It works; however, if I move the center point of the spheres outside their own bounds, I lose particles.
#define NPEOPLE 5
uniform sampler2D posArray;
uniform sampler2D velArray;
uniform vec3 centerPoint[NPEOPLE];
uniform float radius[NPEOPLE];
uniform float duration;
varying vec4 texCoord;
void main(void) {
float personToGet = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
float mass = texture2D( velArray, texCoord.st).a;
vec3 v = texture2D( velArray, texCoord.st).rgb;
// map into 'cinder space'
p = (p * - 1.0) + 0.5;
vec3 vec_p = p - centerPoint[int(personToGet)];
float len_vec_p = sqrt( ( vec_p.x * vec_p.x ) + (vec_p.y * vec_p.y) + (vec_p.z * vec_p.z) );
vec_p = ( ( radius[int(personToGet)] /* mass */ ) / len_vec_p ) * vec_p;
vec3 new_p = ( vec_p + centerPoint[int(personToGet)] );
new_p = p + ( (new_p - p) / (duration) );
// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;
vec3 new_v = v;
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, personToGet);
gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, mass);
}
I'm passing in an array of 5 vec3s and an array of 5 floats as the center points and radii.
The particles are set up with a random position at the beginning and move towards the sphere whose index is stored in the alpha channel of the position texture.
My aim is to pass in blob data from OpenCV and map the spheres to people on a camera feed.
It's really uninteresting visually at the moment, so I will need to use the velocity texture to add to the behaviour of the particles.

Resources