Particle system design using Three.js and Shader - three.js

I'm very new to this community, so if I claim something that isn't right, please correct me.
Now to the point: I'm designing a particle system using the Three.js library. In particular, I'm using THREE.Geometry() and controlling the vertices with a shader. I want the particle movement restricted inside a box, which means that when a particle crosses a face of the box, its new position should be on the opposite side of that face.
Here's my approach in the vertex shader:
uniform float elapsedTime;
void main() {
gl_PointSize = 3.2;
vec3 pos = position;
pos.y -= elapsedTime*2.1;
if( pos.y < -100.0) {
pos.y = 100.0;
}
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0 );
}
The elapsedTime is sent from the JavaScript animation loop via a uniform, and the y position of each vertex is updated according to the time. As a test, I want a particle that falls below the bottom plane (y = -100) to move to the top plane. That was my plan. And this is the result after they all reach the bottom:
(Image: particles starting to fall)
(Image: after they reach the bottom)
So, what am I missing here?

You can achieve it using the mod() function:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 300);
var renderer = new THREE.WebGLRenderer({
antialias: true
});
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
var controls = new THREE.OrbitControls(camera, renderer.domElement);
var gridTop = new THREE.GridHelper(200, 10);
gridTop.position.y = 100;
var gridBottom = new THREE.GridHelper(200, 10);
gridBottom.position.y = -100;
scene.add(gridTop, gridBottom);
var pts = [];
for (let i = 0; i < 1000; i++) {
pts.push(new THREE.Vector3(Math.random() - 0.5, Math.random() - 0.5, Math.random() - 0.5).multiplyScalar(100));
}
var geom = new THREE.BufferGeometry().setFromPoints(pts);
var mat = new THREE.PointsMaterial({
size: 2,
color: "aqua"
});
var uniforms = {
time: {
value: 0
},
highY: {
value: 100
},
lowY: {
value: -100
}
}
mat.onBeforeCompile = shader => {
shader.uniforms.time = uniforms.time;
shader.uniforms.highY = uniforms.highY;
shader.uniforms.lowY = uniforms.lowY;
console.log(shader.vertexShader);
shader.vertexShader = `
uniform float time;
uniform float highY;
uniform float lowY;
` + shader.vertexShader;
shader.vertexShader = shader.vertexShader.replace(
`#include <begin_vertex>`,
`#include <begin_vertex>
float totalY = highY - lowY;
transformed.y = highY - mod(highY - (transformed.y - time * 20.), totalY);
`
);
}
var points = new THREE.Points(geom, mat);
scene.add(points);
var clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
uniforms.time.value = clock.getElapsedTime();
renderer.render(scene, camera);
});
body {
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
<script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>

You can not change state in a shader. A vertex shader's only outputs are gl_Position (to generate points/lines/triangles) and the varyings that get passed to the fragment shader. A fragment shader's only output is gl_FragColor (in general). So trying to change pos.y will do nothing: the moment the shader exits, your change is forgotten.
For your particle code, though, you could make the position a repeating function of the time:
const float duration = 5.0;
float t = fract(elapsedTime / duration);
pos.y = mix(-100.0, 100.0, t);
Assuming elapsedTime is in seconds, pos.y will go from -100 to 100 over 5 seconds and repeat.
Note that in this case all the particles will fall at the same time. You could add an attribute to give each of them a different time offset, as sketched below, or you could work their position into your own formula. Related to that, you might find this article useful.
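For example, a per-particle offset might look roughly like this (a minimal sketch, assuming a ShaderMaterial; the attribute name offset and the variables particleCount/geometry are illustrative, not from the question):
// JavaScript: one random phase offset per particle
const offsets = new Float32Array(particleCount);
for (let i = 0; i < particleCount; i++) offsets[i] = Math.random();
geometry.setAttribute('offset', new THREE.BufferAttribute(offsets, 1));
// vertex shader: use the offset to shift each particle's phase
attribute float offset;
uniform float elapsedTime;
void main() {
  const float duration = 5.0;
  float t = fract(elapsedTime / duration + offset);
  vec3 pos = position;
  pos.y = mix(-100.0, 100.0, t); // goes from -100 to 100 and repeats; swap the arguments to fall instead
  gl_PointSize = 3.2;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}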
You could also do the particle movement in JavaScript, like this example and this one, updating the positions in the Geometry (or better, a BufferGeometry).
Yet another solution is to do the movement in a separate shader by storing the positions in a texture and updating them into a new texture, then using that texture as input to another set of shaders that draws the particles.

Related

Push particles away from the mouse position in GLSL and three.js

I have the following setup for my THREE.Points Object:
this.particleGeometry = new THREE.BufferGeometry()
this.particleMaterial = new THREE.ShaderMaterial(
{
vertexShader: vshader,
fragmentShader: fshader,
blending: THREE.AdditiveBlending,
depthWrite: false,
uniforms: {
uTime: new THREE.Uniform(0),
uMousePosition: this.mousePosition
}
}
)
and then some code to place points in the BufferGeometry on a sphere. That is working fine.
I also set up a Raycaster to track the mouse position intersecting a hidden plane and then update the uniform uMousePosition accordingly. That also works fine, I get the mouse position sent to my vertex shader.
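For reference, that wiring typically looks something like the following sketch (names such as hitPlane and particleMaterial are illustrative here, not the asker's actual code):
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();
// an invisible plane at z = 0 for the ray to hit
const hitPlane = new THREE.Mesh(
  new THREE.PlaneGeometry(100, 100),
  new THREE.MeshBasicMaterial({ visible: false })
);
scene.add(hitPlane);
window.addEventListener('pointermove', (e) => {
  pointer.set((e.clientX / innerWidth) * 2 - 1, -(e.clientY / innerHeight) * 2 + 1);
  raycaster.setFromCamera(pointer, camera);
  const hit = raycaster.intersectObject(hitPlane)[0];
  if (hit) particleMaterial.uniforms.uMousePosition.value.copy(hit.point); // assumes uMousePosition holds a Vector3
});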
Now I am trying to make the particles within a certain distance d of the mouse get pushed away from it (with the closest ones pushed the most, of course), and also to apply a gravity back toward their original position so everything restores over time.
So here is what I have in my vertex shader:
float lerp(float a, float b, float amount) {
return a + (b - a) * amount;
}
void main() {
vec3 p = position;
float dist = min(distance(p, mousePosition), 1.);
float lerpFactor = .2;
p.x = lerp(p.x, position.x * dist, lerpFactor);
p.y = lerp(p.y, position.y * dist, lerpFactor);
p.z = lerp(p.z, position.z * dist, lerpFactor); // mouse is always at z = 0
vec4 mvPosition = modelViewMatrix * vec4(p, 1.);
gl_PointSize = 30. * (1. / -mvPosition.z);
gl_Position = projectionMatrix * mvPosition;
}
And here is what it looks like when the mouse is outside the sphere (I added a small sphere that moves with the mouse position to indicate it):
And here when the mouse is inside:
Outside already looks kind of correct, but when the mouse is inside, it only moves the particles back closer to their original position, where it should push them further outside instead. I guess I somehow have to determine the direction of the distance.
Also, the lerp method does not lerp; the particles jump directly to their position.
So I wonder how I get the correct distance to the mouse so the particles always move within a certain area, and also how to animate the lerp / gravity effect.
Here's how you could do it as a first approximation:
body{
overflow: hidden;
margin: 0;
}
<script type="module">
import * as THREE from "https://cdn.skypack.dev/three@0.136.0";
import {OrbitControls} from "https://cdn.skypack.dev/three@0.136.0/examples/jsm/controls/OrbitControls.js";
import * as BufferGeometryUtils from "https://cdn.skypack.dev/three@0.136.0/examples/jsm/utils/BufferGeometryUtils.js";
let scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 100);
camera.position.set(0, 0, 10);
let renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
let controls = new OrbitControls(camera, renderer.domElement);
let marker = new THREE.Mesh(new THREE.SphereGeometry(0.5, 16, 8), new THREE.MeshBasicMaterial({color: "red", wireframe: true}));
scene.add(marker);
let g = new THREE.IcosahedronGeometry(4, 20);
g = BufferGeometryUtils.mergeVertices(g);
let uniforms = {
mousePos: {value: new THREE.Vector3()}
}
let m = new THREE.PointsMaterial({
size: 0.1,
onBeforeCompile: shader => {
shader.uniforms.mousePos = uniforms.mousePos;
shader.vertexShader = `
uniform vec3 mousePos;
${shader.vertexShader}
`.replace(
`#include <begin_vertex>`,
`#include <begin_vertex>
vec3 seg = position - mousePos;
vec3 dir = normalize(seg);
float dist = length(seg);
if (dist < 2.){
float force = clamp(1. / (dist * dist), 0., 1.);
transformed += dir * force;
}
`
);
console.log(shader.vertexShader);
}
});
let p = new THREE.Points(g, m);
scene.add(p);
let clock = new THREE.Clock();
renderer.setAnimationLoop( _ => {
let t = clock.getElapsedTime();
marker.position.x = Math.sin(t * 0.5) * 5;
marker.position.y = Math.cos(t * 0.3) * 5;
uniforms.mousePos.value.copy(marker.position);
renderer.render(scene, camera);
})
</script>
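If you want the displacement to ease in and out instead of jumping (the "lerp / gravity" part of the question), one simple tweak to the loop above is to lerp the uniform toward the target each frame rather than copying it; a sketch (0.05 is an arbitrary easing factor, and Vector3.lerp is built into three.js):
renderer.setAnimationLoop(_ => {
  let t = clock.getElapsedTime();
  marker.position.x = Math.sin(t * 0.5) * 5;
  marker.position.y = Math.cos(t * 0.3) * 5;
  uniforms.mousePos.value.lerp(marker.position, 0.05); // eases toward the marker instead of snapping
  renderer.render(scene, camera);
});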

Performantly render tens of thousands of spheres of variable size/color/position in Three.js?

This question is picking up from my last question where I found that using Points leads to problems: https://stackoverflow.com/a/60306638/4749956
To solve this you'll need to draw your points using quads instead of points. There are many ways to do that. Draw each quad as a separate mesh or sprite, or merge all the quads into another mesh, or use InstancedMesh where you'll need a matrix per point, or write custom shaders to do points (see the last example in this article)
I've been trying to figure this answer out. My questions are
What is 'instancing'? What is the difference between merging geometries and instancing? And, if I were to do either one of these, what geometry would I use and how would I vary color? I've been looking at this example:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html
And I see that for each sphere you would have a geometry which would apply the position and the size (scale?). Would the underlying geometry be a SphereBufferGeometry of unit radius, then? But, how do you apply color?
Also, I read about the custom shader method, and it makes some vague sense. But, it seems more complex. Would the performance be any better than the above?
Based on your previous question...
First off, instancing is a way to tell three.js to draw the same geometry multiple times but change one or more things for each "instance". IIRC the only thing three.js supports out of the box is setting a different matrix (position, orientation, scale) for each instance. Past that, like having different colors for example, you have to write custom shaders.
Instancing allows you to ask the system to draw many things with one "ask" instead of an "ask" per thing. That means it ends up being much faster. You can think of it like anything: if you want 3 hamburgers you could ask someone to make you 1. When they finished you could ask them to make another. When they finished you could ask them to make a 3rd. That would be much slower than just asking them to make 3 hamburgers at the start. That's not a perfect analogy but it does point out how asking for multiple things one at a time is less efficient than asking for multiple things all at once.
Merging meshes is yet another solution. Following the bad analogy above, merging meshes is like making one big 1-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting toppings and buns on one large burger is marginally faster than doing the same to 3 small burgers.
As for which is the best solution for you, that depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes, on the other hand, rotate in world space by default, so if you made instances of quads or a merged set of quads and tried to rotate them, they would turn and not face the camera like Points do. If you used sphere geometry then you'd have the issue that instead of computing only 6 vertices per quad with a circle drawn on it, you'd be computing 100s or 1000s of vertices per sphere, which would be slower than 6 vertices per quad.
So again it requires a custom shader to keep the points facing the camera.
To do it with instancing, the short version is that you decide which vertex data are repeated for each instance. For example, for a textured quad we need 6 vertex positions and 6 UVs. For these you make normal BufferAttributes.
Then you decide which vertex data are unique to each instance. In your case the size, the color, and the center of the point. For each of these we make an InstancedBufferAttribute.
We add all of those attributes to an InstancedBufferGeometry and, as the last argument, we tell it how many instances.
At draw time you can think of it like this:
for each instance
  set size to the next value in the size attribute
  set color to the next value in the color attribute
  set center to the next value in the center attribute
  call the vertex shader 6 times, with position and uv set to the nth value in their attributes
In this way you get the same geometry (the positions and uvs) used multiple times but each time a few values (size, color, center) change.
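As a condensed sketch of that setup (illustrative values, two instances; the full picking example below does the same thing at scale):
// per-vertex data, shared by every instance (one quad = 6 corners, 2 components each)
const quadPositions = new Float32Array([
  -0.5, -0.5,   0.5, -0.5,   -0.5, 0.5,
  -0.5,  0.5,   0.5, -0.5,    0.5, 0.5,
]);
const quadUvs = new Float32Array([0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1]);
// per-instance data, advanced once per instance
const centers = new Float32Array([0, 0, 0,   5, 0, 0]);
const sizes   = new Float32Array([1, 2]);
const colors  = new Float32Array([1, 0, 0,   0, 1, 0]);
const geo = new THREE.InstancedBufferGeometry();
geo.setAttribute('position',    new THREE.Float32BufferAttribute(quadPositions, 2));
geo.setAttribute('uv',          new THREE.Float32BufferAttribute(quadUvs, 2));
geo.setAttribute('center',      new THREE.InstancedBufferAttribute(centers, 3));
geo.setAttribute('size',        new THREE.InstancedBufferAttribute(sizes, 1));
geo.setAttribute('customColor', new THREE.InstancedBufferAttribute(colors, 3));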
body {
margin: 0;
}
#c {
width: 100vw;
height: 100vh;
display: block;
}
#info {
position: absolute;
right: 0;
bottom: 0;
color: red;
background: black;
}
<canvas id="c"></canvas>
<div id="info"></div>
<script type="module">
// Three.js - Picking - RayCaster w/Transparency
// from https://threejsfundamentals.org/threejs/threejs-picking-gpu.html
import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";
function main() {
const infoElem = document.querySelector("#info");
const canvas = document.querySelector("#c");
const renderer = new THREE.WebGLRenderer({ canvas });
const fov = 60;
const aspect = 2; // the canvas default
const near = 0.1;
const far = 200;
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.z = 30;
const scene = new THREE.Scene();
scene.background = new THREE.Color(0);
const pickingScene = new THREE.Scene();
pickingScene.background = new THREE.Color(0);
// put the camera on a pole (parent it to an object)
// so we can spin the pole to move the camera around the scene
const cameraPole = new THREE.Object3D();
scene.add(cameraPole);
cameraPole.add(camera);
function randomNormalizedColor() {
return Math.random();
}
function getRandomInt(n) {
return Math.floor(Math.random() * n);
}
function getCanvasRelativePosition(e) {
const rect = canvas.getBoundingClientRect();
return {
x: e.clientX - rect.left,
y: e.clientY - rect.top
};
}
const textureLoader = new THREE.TextureLoader();
const particleTexture =
"https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";
const vertexShader = `
attribute float size;
attribute vec3 customColor;
attribute vec3 center;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vColor = customColor;
vUv = uv;
vec3 viewOffset = position * size ;
vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
gl_Position = projectionMatrix * mvPosition;
}
`;
const fragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.5) discard;
gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
}
`;
const pickFragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.25) discard;
gl_FragColor = vec4(vColor.rgb, 1.0);
}
`;
const materialSettings = {
uniforms: {
texture: {
type: "t",
value: textureLoader.load(particleTexture)
}
},
vertexShader: vertexShader,
fragmentShader: fragmentShader,
blending: THREE.NormalBlending,
depthTest: true,
transparent: false
};
const createParticleMaterial = () => {
const material = new THREE.ShaderMaterial(materialSettings);
return material;
};
const createPickingMaterial = () => {
const material = new THREE.ShaderMaterial({
...materialSettings,
fragmentShader: pickFragmentShader,
blending: THREE.NormalBlending
});
return material;
};
const geometry = new THREE.InstancedBufferGeometry();
const pickingGeometry = new THREE.InstancedBufferGeometry();
const colors = [];
const sizes = [];
const pickingColors = [];
const pickingColor = new THREE.Color();
const centers = [];
const numSpheres = 30;
const positions = [
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
-0.5, 0.5,
0.5, -0.5,
0.5, 0.5,
];
const uvs = [
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1,
];
for (let i = 0; i < numSpheres; i++) {
colors[3 * i] = randomNormalizedColor();
colors[3 * i + 1] = randomNormalizedColor();
colors[3 * i + 2] = randomNormalizedColor();
const rgbPickingColor = pickingColor.setHex(i + 1);
pickingColors[3 * i] = rgbPickingColor.r;
pickingColors[3 * i + 1] = rgbPickingColor.g;
pickingColors[3 * i + 2] = rgbPickingColor.b;
sizes[i] = getRandomInt(5);
centers[3 * i] = getRandomInt(20);
centers[3 * i + 1] = getRandomInt(20);
centers[3 * i + 2] = getRandomInt(20);
}
geometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
geometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
geometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
);
geometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
geometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));
const material = createParticleMaterial();
const points = new THREE.InstancedMesh(geometry, material, numSpheres);
// setup geometry and material for GPU picking
pickingGeometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
pickingGeometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
pickingGeometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
);
pickingGeometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
pickingGeometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
);
const pickingMaterial = createPickingMaterial();
const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);
scene.add(points);
pickingScene.add(pickingPoints);
function resizeRendererToDisplaySize(renderer) {
const canvas = renderer.domElement;
const width = canvas.clientWidth;
const height = canvas.clientHeight;
const needResize = canvas.width !== width || canvas.height !== height;
if (needResize) {
renderer.setSize(width, height, false);
}
return needResize;
}
class GPUPickHelper {
constructor() {
// create a 1x1 pixel render target
this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
this.pixelBuffer = new Uint8Array(4);
}
pick(cssPosition, pickingScene, camera) {
const { pickingTexture, pixelBuffer } = this;
// set the view offset to represent just a single pixel under the mouse
const pixelRatio = renderer.getPixelRatio();
camera.setViewOffset(
renderer.getContext().drawingBufferWidth, // full width
renderer.getContext().drawingBufferHeight, // full height
(cssPosition.x * pixelRatio) | 0, // rect x
(cssPosition.y * pixelRatio) | 0, // rect y
1, // rect width
1 // rect height
);
// render the scene
renderer.setRenderTarget(pickingTexture);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// clear the view offset so rendering returns to normal
camera.clearViewOffset();
//read the pixel
renderer.readRenderTargetPixels(
pickingTexture,
0, // x
0, // y
1, // width
1, // height
pixelBuffer
);
const id =
(pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
infoElem.textContent = `You clicked sphere number ${id}`;
return id;
}
}
const pickHelper = new GPUPickHelper();
function render(time) {
time *= 0.001; // convert to seconds;
if (resizeRendererToDisplaySize(renderer)) {
const canvas = renderer.domElement;
camera.aspect = canvas.clientWidth / canvas.clientHeight;
camera.updateProjectionMatrix();
}
cameraPole.rotation.y = time * 0.1;
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
function onClick(e) {
const pickPosition = getCanvasRelativePosition(e);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
function onTouch(e) {
const touch = e.touches[0];
const pickPosition = getCanvasRelativePosition(touch);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
window.addEventListener("mousedown", onClick);
window.addEventListener("touchstart", onTouch);
}
main();
</script>
This is quite a broad topic. In short, both merging and instancing are about reducing the number of draw calls when rendering something.
If you bind your sphere geometry once but keep re-rendering it, it costs you more to tell your computer to draw it many times than it costs the computer to actually draw it. You end up with the GPU, a powerful parallel processing device, sitting idle.
Obviously, if you create a unique sphere at each point in space and merge them all, you pay the price of telling the GPU to render only once, and it will be busy rendering thousands of your spheres.
However, merging will increase your memory footprint and has some overhead when you're actually creating the unique data. Instancing is a built-in, clever way of achieving the same effect at a lower memory cost.
I have an article written on this topic.
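As an aside, newer three.js releases also expose per-instance color directly on InstancedMesh via setColorAt, so a minimal instanced-spheres sketch could look like this (illustrative values; check that your three.js version has setColorAt before relying on it):
const sphereGeo = new THREE.SphereBufferGeometry(1, 16, 12); // unit radius, shared by every instance
const sphereMat = new THREE.MeshBasicMaterial();
const count = 10000;
const spheres = new THREE.InstancedMesh(sphereGeo, sphereMat, count);
const dummy = new THREE.Object3D();
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
  dummy.position.set(Math.random() * 100 - 50, Math.random() * 100 - 50, Math.random() * 100 - 50);
  dummy.scale.setScalar(0.5 + Math.random()); // per-instance size via the matrix
  dummy.updateMatrix();
  spheres.setMatrixAt(i, dummy.matrix);
  spheres.setColorAt(i, color.setHSL(Math.random(), 1.0, 0.5)); // per-instance color
}
spheres.instanceMatrix.needsUpdate = true;
if (spheres.instanceColor) spheres.instanceColor.needsUpdate = true;
scene.add(spheres);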

FBO Particles with Cumulative Movement

Link to thread threejs discourse: https://discourse.threejs.org/t/fbo-particles-with-cumulative-movement/7221
This is difficult for me to explain because of my limited knowledge on the subject, but I'm gonna do my best..
At this point, I have a basic FBO particle system in place that works. The following is how it's set up:
var FBO = function( exports ){
var scene, orthoCamera, rtt;
exports.init = function( width, height, renderer, simulationMaterial, renderMaterial ){
var gl = renderer.getContext();
//1 we need FLOAT Textures to store positions
//https://github.com/KhronosGroup/WebGL/blob/master/sdk/tests/conformance/extensions/oes-texture-float.html
if (!gl.getExtension("OES_texture_float")){
throw new Error( "float textures not supported" );
}
//2 we need to access textures from within the vertex shader
//https://github.com/KhronosGroup/WebGL/blob/90ceaac0c4546b1aad634a6a5c4d2dfae9f4d124/conformance-suites/1.0.0/extra/webgl-info.html
if( gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) == 0 ) {
throw new Error( "vertex shader cannot read textures" );
}
//3 rtt setup
scene = new THREE.Scene();
orthoCamera = new THREE.OrthographicCamera(-1,1,1,-1,1/Math.pow( 2, 53 ),1 );
//4 create a target texture
var options = {
minFilter: THREE.NearestFilter,//important as we want to sample square pixels
magFilter: THREE.NearestFilter,//
format: THREE.RGBAFormat,//180407 changed to RGBAFormat
type:THREE.FloatType//important as we need precise coordinates (not ints)
};
rtt = new THREE.WebGLRenderTarget( width,height, options);
//5 the simulation:
//create a bi-unit quadrilateral and uses the simulation material to update the Float Texture
var geom = new THREE.BufferGeometry();
geom.addAttribute( 'position', new THREE.BufferAttribute( new Float32Array([ -1,-1,0, 1,-1,0, 1,1,0, -1,-1, 0, 1, 1, 0, -1,1,0 ]), 3 ) );
geom.addAttribute( 'uv', new THREE.BufferAttribute( new Float32Array([ 0,1, 1,1, 1,0, 0,1, 1,0, 0,0 ]), 2 ) );
scene.add( new THREE.Mesh( geom, simulationMaterial ) );
//6 the particles:
//create a vertex buffer of size width * height with normalized coordinates
var l = (width * height );
var vertices = new Float32Array( l * 3 );
for ( var i = 0; i < l; i++ ) {
var i3 = i * 3;
vertices[ i3 ] = ( i % width ) / width ;
vertices[ i3 + 1 ] = ( i / width ) / height;
}
//create the particles geometry
var geometry = new THREE.BufferGeometry();
geometry.addAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
//the rendermaterial is used to render the particles
exports.particles = new THREE.Points( geometry, renderMaterial );
exports.particles.frustumCulled = false;
exports.renderer = renderer;
};
//7 update loop
exports.update = function(){
//1 update the simulation and render the result in a target texture
// exports.renderer.render( scene, orthoCamera, rtt, true );
exports.renderer.setRenderTarget( rtt );
exports.renderer.render( scene, orthoCamera );
exports.renderer.setRenderTarget( null );
//2 use the result of the swap as the new position for the particles' renderer
// had to add .texture on the end of rtt for r103
exports.particles.material.uniforms.positions.value = rtt.texture;
};
return exports;
}({});
The following are the shaders it uses:
<script type="x-shader/x-vertex" id="simulation_vs">
//vertex shader
varying vec2 vUv;
void main() {
vUv = vec2(uv.x, uv.y);
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<script type="x-shader/x-fragment" id="simulation_fs">
//fragment Shader
uniform sampler2D positions;//DATA Texture containing original positions
varying vec2 vUv;
void main() {
//basic simulation: displays the particles in place.
vec3 pos = texture2D( positions, vUv ).rgb;
// we can move the particle here
gl_FragColor = vec4( pos,1.0 );
}
</script>
<script type="x-shader/x-vertex" id="render_vs">
//vertex shader
uniform sampler2D positions;//RenderTarget containing the transformed positions
uniform float pointSize;//size
void main() {
//the mesh is a normalized square so the uvs = the xy positions of the vertices
vec3 pos = texture2D( positions, position.xy ).xyz;
//pos now contains a 3D position in space, we can use it as a regular vertex
//regular projection of our position
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 );
//sets the point size
gl_PointSize = pointSize;
}
</script>
<script type="x-shader/x-fragment" id="render_fs">
//fragment shader
void main()
{
gl_FragColor = vec4( vec3( 1. ), .25 );
}
</script>
I understand that I would move the particles in the "simulation_fs", but if I move a particle in that shader by doing something like this,
pos.x += 1.0;
it will still only shift it one unit from the original texture position. I want the movement to be cumulative.
Would using a 2nd set of simulation shaders allow me to move the particles in a cumulative way? Is that a practical solution?
For cumulative movement, you need to use uniforms:
Look into passing a uniform named time to your vertex shader. Then you can update the time once per frame, and you can use that to animate your vertex positions. For example:
position.x = 2.0 * time; // Increment linearly
position.x = sin(time); // Sin wave back-forth animation
Without a changing variable, your vertex animations will be static from one frame to the next.
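On the JavaScript side, driving such a time uniform once per frame could look roughly like this (a sketch with illustrative shaders; renderer, scene and camera come from your existing setup):
const clock = new THREE.Clock();
const timeMat = new THREE.ShaderMaterial({
  uniforms: { time: { value: 0 } },
  vertexShader: `
    uniform float time;
    void main() {
      vec3 p = position;
      p.x += sin(time); // simple back-and-forth motion driven by the uniform
      gl_PointSize = 2.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }`,
  fragmentShader: `void main() { gl_FragColor = vec4(1.0); }`
});
function animate() {
  requestAnimationFrame(animate);
  timeMat.uniforms.time.value = clock.getElapsedTime(); // update once per frame
  renderer.render(scene, camera);
}
animate();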
I needed to accomplish something like this, and set up an absolutely minimal example that I can tweak in the future. You'll see the positional changes are cumulative.
The following was simplified from a wonderful discussion of FBO's by Nicolas Barradeau (a webgl wizard):
// specify the container where we'll render the scene
var elem = document.querySelector('body'),
elemW = elem.clientWidth,
elemH = elem.clientHeight
// generate a scene object
var scene = new THREE.Scene();
// generate a camera
var camera = new THREE.PerspectiveCamera(75, elemW/elemH, 0.001, 100);
// generate a renderer
var renderer = new THREE.WebGLRenderer({antialias: true, alpha: true});
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(elemW, elemH);
elem.appendChild(renderer.domElement);
// generate controls
var controls = new THREE.TrackballControls(camera, renderer.domElement);
// position camera and controls
camera.position.set(0.5, 0.5, -5);
controls.target = new THREE.Vector3(0.5, 0.5, 0);
/**
* FBO
**/
// verify browser agent supports "frame buffer object" features
gl = renderer.getContext();
if (!gl.getExtension('OES_texture_float') ||
gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) == 0) {
alert(' * Cannot create FBO :(');
}
// set initial positions of `w*h` particles
var w = h = 256,
i = 0,
data = new Float32Array(w*h*3);
for (var x=0; x<w; x++) {
for (var y=0; y<h; y++) {
data[i++] = x/w;
data[i++] = y/h;
data[i++] = 0;
}
}
// feed those positions into a data texture
var dataTex = new THREE.DataTexture(data, w, h, THREE.RGBFormat, THREE.FloatType);
dataTex.minFilter = THREE.NearestFilter;
dataTex.magFilter = THREE.NearestFilter;
dataTex.needsUpdate = true;
// add the data texture with positions to a material for the simulation
var simMaterial = new THREE.RawShaderMaterial({
uniforms: { posTex: { type: 't', value: dataTex }, },
vertexShader: document.querySelector('#sim-vs').textContent,
fragmentShader: document.querySelector('#sim-fs').textContent,
});
// delete dataTex; it isn't used after initializing point positions
delete dataTex;
THREE.FBO = function(w, simMat) {
this.scene = new THREE.Scene();
this.camera = new THREE.OrthographicCamera(-w/2, w/2, w/2, -w/2, -1, 1);
this.scene.add(new THREE.Mesh(new THREE.PlaneGeometry(w, w), simMat));
};
// create a scene where we'll render the positional attributes
var fbo = new THREE.FBO(w, simMaterial);
// create render targets a + b to which the simulation will be rendered
var renderTargetA = new THREE.WebGLRenderTarget(w, h, {
wrapS: THREE.RepeatWrapping,
wrapT: THREE.RepeatWrapping,
minFilter: THREE.NearestFilter,
magFilter: THREE.NearestFilter,
format: THREE.RGBFormat,
type: THREE.FloatType,
stencilBuffer: false,
});
// a second render target lets us store input + output positional states
renderTargetB = renderTargetA.clone();
// render the positions to the render targets
renderer.render(fbo.scene, fbo.camera, renderTargetA, false);
renderer.render(fbo.scene, fbo.camera, renderTargetB, false);
// store the uv attrs; each is x,y and identifies a given point's
// position data within the positional texture; must be scaled 0:1!
var geo = new THREE.BufferGeometry(),
arr = new Float32Array(w*h*3);
for (var i=0, j=0; j<w*h; j++) {
arr[i++] = (j%w)/w; // x offset of point j within the positional texture
arr[i++] = Math.floor(j/w)/h; // y offset of point j within the positional texture
arr[i++] = 0;
}
geo.addAttribute('position', new THREE.BufferAttribute(arr, 3, true))
// create material the user sees
var material = new THREE.RawShaderMaterial({
uniforms: {
posMap: { type: 't', value: null }, // `posMap` is set each render
},
vertexShader: document.querySelector('#ui-vert').textContent,
fragmentShader: document.querySelector('#ui-frag').textContent,
transparent: true,
});
// add the points the user sees to the scene
var mesh = new THREE.Points(geo, material);
scene.add(mesh);
function render() {
// at the start of the render block, A is one frame behind B
var oldA = renderTargetA; // store A, the penultimate state
renderTargetA = renderTargetB; // advance A to the updated state
renderTargetB = oldA; // set B to the penultimate state
// pass the updated positional values to the simulation
simMaterial.uniforms.posTex.value = renderTargetA.texture;
// run a frame and store the new positional values in renderTargetB
renderer.render(fbo.scene, fbo.camera, renderTargetB, false);
// pass the new positional values to the scene users see
material.uniforms.posMap.value = renderTargetB.texture;
// render the scene users see as normal
renderer.render(scene, camera);
controls.update();
requestAnimationFrame(render);
};
render();
html, body { width: 100%; height: 100%; background: #000; }
body { margin: 0; overflow: hidden; }
canvas { width: 100%; height: 100%; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/101/three.min.js"></script>
<script src="https://threejs.org/examples/js/controls/TrackballControls.js"></script>
<!-- The simulation shaders update positional attributes -->
<script id='sim-vs' type='x-shader/x-vert'>
precision mediump float;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec2 uv; // x,y offsets of each point in texture
attribute vec3 position;
varying vec2 vUv;
void main() {
vUv = vec2(uv.x, 1.0 - uv.y);
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script id='sim-fs' type='x-shader/x-frag'>
precision mediump float;
uniform sampler2D posTex;
varying vec2 vUv;
void main() {
// read the supplied x,y,z vert positions
vec3 pos = texture2D(posTex, vUv).xyz;
// update the positional attributes here!
pos.x += cos(pos.y) / 100.0;
pos.y += tan(pos.x) / 100.0;
// render the new positional attributes
gl_FragColor = vec4(pos, 1.0);
}
</script>
<!-- The ui shaders render what the user sees -->
<script id='ui-vert' type='x-shader/x-vert'>
precision mediump float;
uniform sampler2D posMap; // contains positional data read from sim-fs
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec2 position;
void main() {
// read this particle's position, which is stored as a pixel color
vec3 pos = texture2D(posMap, position.xy).xyz;
// project this particle
vec4 mvPosition = modelViewMatrix * vec4(pos, 1.0);
gl_Position = projectionMatrix * mvPosition;
// set the size of each particle
gl_PointSize = 0.3 / -mvPosition.z;
}
</script>
<script id='ui-frag' type='x-shader/x-frag'>
precision mediump float;
void main() {
gl_FragColor = vec4(0.0, 0.5, 1.5, 1.0);
}
</script>

How to make a wave pattern circularly around a cylinder?

Hello, I'm trying to make a wave pattern on the surface of a cylinder. The waves should rotate with the rotation of the surface, in a way where the sine period moves in circles and the amplitudes are long mounds on the surface. Here are some pictures to better explain what I mean.
This is what I'm trying to get the top-down view of the cylinder to look similar to:
This is the top view of my cylinder. I'd like the wave to change direction with the rotation of the circle, so it looks the same from all directions.
I feel like I'm very close, I'm just not sure what quaternion or angle to multiply against the vector:
var geometry = this.threeDHandler.threeD_meshes[0].geometry;
var vec3 = new THREE.Vector3(); // temp vector
for (let i = 0; i < geometry.vertices.length; i++) {
vec3.copy(geometry.vertices[i]); // copy current vertex to the temp vector
vec3.setX(0);
vec3.normalize(); // normalize
//part i'm confused about
const quaternion = new THREE.Quaternion();
const xPos = geometry.vertices[i].x;
//trying to twist the sine around the circle
const twistAmount = 100;
const upVec = new THREE.Vector3(0, 0, 1);
quaternion.setFromAxisAngle(
upVec,
(Math.PI / 180) * (xPos / twistAmount)
);
vec3.multiplyScalar(Math.sin((geometry.vertices[i].x* Math.PI) * period) * amplitude) // multiply with sin function
geometry.vertices[i].add(vec3); // add the temp vector to the current vertex
geometry.vertices[i].applyQuaternion(quaternion);
}
geometry.verticesNeedUpdate = true;
geometry.computeVertexNormals();
You can use the absolute value of the sine of the angle that a vertex belongs to.
In this case you can use the THREE.Spherical() object, which lets you get the spherical coordinates of a vector:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.set(0, 0, 6);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
var controls = new THREE.OrbitControls(camera, renderer.domElement);
var cylinderGeom = new THREE.CylinderGeometry(1, 1, 4, 128, 40, true);
var vec3 = new THREE.Vector3(); // temp vector
var vec3_2 = new THREE.Vector3(); // temp vector 2
var spherical = new THREE.Spherical();
cylinderGeom.vertices.forEach(v => {
vec3.copy(v); // copy current vertex to the temp vector
vec3.setY(0); // leave x and z (thus the vector is parallel to XZ plane)
vec3.normalize(); // normalize
vec3.multiplyScalar(Math.sin(v.y * Math.PI) * 0.25) // multiply with sin function
// radial wave
vec3_2.copy(v).setY(0).normalize();
spherical.setFromVector3(vec3_2);
vec3_2.setLength(Math.abs(Math.sin((spherical.theta * 4) + v.y * 2) * 0.25));
v.add(vec3).add(vec3_2); // add the temp vectors to the current vertex
})
cylinderGeom.computeVertexNormals();
var cylinder = new THREE.Mesh(cylinderGeom, new THREE.MeshNormalMaterial({
side: THREE.DoubleSide,
wireframe: false
}));
scene.add(cylinder);
renderer.setAnimationLoop(() => {
renderer.render(scene, camera);
})
body {
overflow: hidden;
margin: 0;
}
<script src="https://cdn.jsdelivr.net/npm/three#0.124.0/build/three.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three#0.124.0/examples/js/controls/OrbitControls.js"></script>

Weld edge vertices of BoxBufferGeometry

I am trying to create terrain in the shape of a cube which will allow for vertex displacement along the y‑axis of those on the top plane. All vertices adjacent to those of the top plane need to be connected.
In a performant manner, user input from either desktop or mobile would move them up or down.
From what I have read it is better to offload expensive operations to the GPU. I thought achieving the vertex displacement in a ShaderMaterial with a displacement attribute seemed like a perfect fit until I read the following:
As of THREE r72, directly assigning attributes in a ShaderMaterial is no longer supported. A BufferGeometry instance (instead of a Geometry instance) must be used instead.
So it seems that using attribute for my Geometry is out of the question?
My attempt at displacing the vertices along the top plane using BufferGeometry in the ShaderMaterial however results in the following:
The top plane's vertices of the BufferGeometry are not connected to those of the other planes, unlike those of the Geometry, which are connected using its mergeVertices method. To my knowledge that method is not available for BufferGeometry objects?
Basically what started my fear, uncertainty and doubt concerning Geometry was a post I read by mrdoob.
Summary
I already have this working for Geometry, but would like to make use of the GPU with ShaderMaterial's attributes, seemingly only supported by BufferGeometry, if it offers performance benefits for mobile and if Geometry might be deprecated in the future.
Here is a small snippet illustrating the issue:
let winX = window.innerWidth;
let winY = window.innerHeight;
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, winX / winY, 0.1, 100);
camera.position.set(2, 1, 2);
camera.lookAt(scene.position);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(winX, winY);
document.body.appendChild(renderer.domElement);
const terrainGeo = new THREE.BoxBufferGeometry(1, 1, 1);
const terrainMat = new THREE.ShaderMaterial({
vertexShader: `
attribute float displacement;
varying vec3 dPosition;
void main() {
dPosition = position;
dPosition.y += displacement;
gl_Position = projectionMatrix * modelViewMatrix * vec4(dPosition, 1.0);
}
`,
fragmentShader: `
void main() {
gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
}
`
});
const terrainObj = new THREE.Mesh(terrainGeo, terrainMat);
let displacement = new Float32Array(terrainObj.geometry.attributes.position.count);
displacement.forEach((elem, index) => {
// Select vertex 8 - 11, the top of the cube
if (index >= 8 && index <= 11) {
displacement[index] = Math.random() * 0.1 + 0.25;
}
});
terrainObj.geometry.addAttribute('displacement',
new THREE.BufferAttribute(displacement, 1)
);
scene.add(camera);
scene.add(terrainObj);
const render = () => {
requestAnimationFrame(render);
renderer.render(scene, camera);
}
render();
const gui = new dat.GUI();
const updateBufferAttribute = () => {
terrainObj.geometry.attributes.displacement.needsUpdate = true;
};
gui.add(displacement, 8).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 9).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 10).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 11).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
<script src="https://cdnjs.cloudflare.com/ajax/libs/dat-gui/0.5.1/dat.gui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r76/three.min.js"></script>
<style type="text/css">body { margin: 0 } canvas { display: block }</style>
