Does threejs not divide by depth?

Question: Shouldn't threejs set the w value of vertices equal to their depth?
When projecting objects onto a screen, objects that are far away from the focal point get projected further towards the center of the screen. In projective coordinates this effect is achieved by dividing a point's (x, y, z)-coordinates by its distance from the focal point, w. I've been playing around with threejs's projection matrix and it seems to me that threejs doesn't do that.
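(For reference, the underlying identity of homogeneous coordinates, standard projective geometry rather than anything three.js-specific, is

$$(x, y, z, w) \sim \left(\frac{x}{w},\ \frac{y}{w},\ \frac{z}{w},\ 1\right), \qquad w \neq 0,$$

so whatever the projection matrix writes into $w$ is what each vertex ends up divided by.)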
Consider the following scene:
// src/main.ts
import {
  AmbientLight, BoxGeometry, DirectionalLight, Mesh,
  MeshPhongMaterial, PerspectiveCamera, Scene, WebGLRenderer
} from "three";

const canvas = document.getElementById('canvas') as HTMLCanvasElement;
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;

const renderer = new WebGLRenderer({
  alpha: false,
  antialias: false,
  canvas: canvas,
  depth: true
});

const scene = new Scene();
const camera = new PerspectiveCamera(45, canvas.width / canvas.height, 0.01, 100);
camera.position.set(0, 0, 10);

const light = new DirectionalLight();
light.position.set(-1, 0, 3);
scene.add(light);
const light2 = new AmbientLight();
scene.add(light2);

const cube = new Mesh(new BoxGeometry(1, 1, 1, 1), new MeshPhongMaterial({ color: `rgb(0, 125, 125)` }));
scene.add(cube);
cube.position.set(3.42, 3.42, 0);

renderer.render(scene, camera);
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  </head>
  <body>
    <div id="app" style="width: 600px; height: 600px;">
      <canvas id="canvas" style="width: 100%; height: 100%;"></canvas>
    </div>
    <script type="module" src="/src/main.ts"></script>
  </body>
</html>
This code yields the following image:
Note how the edges of the turquoise box appear exactly parallel to the edges of the canvas. But the front-top-right vertex is further away from my eye than the front-bottom-left vertex. Shouldn't the top-right vertex be slightly distorted towards the center?
I understand that WebGL automatically divides vertices by their w coordinate in the vertex post-processing stage. Shouldn't threejs have used the depth to set w so that this distortion effect is achieved?
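(For reference, that fixed-function step maps the vertex shader's clip-space output to normalized device coordinates: $(x_{ndc},\ y_{ndc},\ z_{ndc}) = (x_c/w_c,\ y_c/w_c,\ z_c/w_c)$.)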
What I imagine is something like this:
import {
  AmbientLight, BoxGeometry, Mesh, MeshBasicMaterial, PerspectiveCamera,
  Scene, ShaderMaterial, SpotLight, TextureLoader, Vector3, WebGLRenderer
} from "three";

const canvas = document.getElementById('canvas') as HTMLCanvasElement;
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;

const loader = new TextureLoader();
const texture = await loader.loadAsync('bricks.jpg');

const renderer = new WebGLRenderer({
  alpha: false,
  antialias: true,
  canvas: canvas,
  depth: true
});

const scene = new Scene();
const camera = new PerspectiveCamera(45, canvas.width / canvas.height, 0.01, 100);
camera.position.set(0, 0, -1);
camera.lookAt(new Vector3(0, 0, 0));

const light = new SpotLight();
light.position.set(-1, 0, -1);
scene.add(light);
const light2 = new AmbientLight();
scene.add(light2);

const box = new Mesh(new BoxGeometry(1, 1, 1, 50, 50, 50), new MeshBasicMaterial({
  map: texture
}));
scene.add(box);
box.position.set(-0.6, 0, 1);

const box2 = new Mesh(new BoxGeometry(1, 1, 1, 50, 50, 50), new ShaderMaterial({
  uniforms: {
    tTexture: { value: texture }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      vec4 clipSpacePos = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
      clipSpacePos.w = length(clipSpacePos.xyz);
      gl_Position = clipSpacePos;
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tTexture;
    void main() {
      gl_FragColor = texture2D(tTexture, vUv);
    }
  `,
}));
scene.add(box2);
box2.position.set(0.6, 0, 1);

renderer.render(scene, camera);
Which gives the following output:
Note how the left block displays strong distortion towards the edges.

I think I understand my mistake now.
My approach was setting w = distance for each vertex instead of w = z (which is, simplified and up to sign, what the standard projection matrix does).
This has the GPU divide all points by their distance from the focal point instead of by their distance from a plane. In other words: it has the GPU project all vertices onto a sphere around me instead of onto a plane in front of me.
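To make the difference concrete, a simplified sketch (writing $c = \cot(\mathrm{fov}/2)$ and ignoring the aspect-ratio and near/far terms of the real PerspectiveCamera matrix): the standard projection matrix has $(0, 0, -1, 0)$ as its bottom row, so for an eye-space point $p = (x, y, z)$

$$w_{clip} = -z \quad\Rightarrow\quad (x, y) \mapsto \left(\frac{c\,x}{-z},\ \frac{c\,y}{-z}\right) \qquad \text{(projection onto a plane)},$$

whereas my shader effectively set

$$w_{clip} = \lVert p \rVert \quad\Rightarrow\quad (x, y) \mapsto \left(\frac{c\,x}{\lVert p \rVert},\ \frac{c\,y}{\lVert p \rVert}\right) \qquad \text{(projection onto a sphere)}.$$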
I think the same difference becomes clearer when comparing a cube map with an equirectangular map (image).
Please do correct me if I'm wrong, but I guess that an equirectangular projection is obtained by projecting onto a curved surface.
Also, this Stack Overflow question does indicate that an equirectangular map needs to be undistorted, that is, straightened, before it can be used as a cube map. (Side note: kudos for some very illustrative images there!)
In other words: if we're going with the "project on a plane" metaphor, then three.js's default perspective projection matrix, which has points divided by z, is what we want. If we're going for the "project on a sphere" metaphor, we indeed want to divide by distance. Since my monitor is a flat plane, the divide-by-z strategy is the appropriate one.
(Of course, corrections and additions are very welcome!)

Related

Colors in THREE.WebGLRenderTarget with alpha channel are darker than expected

I'm trying to render some graphics with transparency into a WebGLRenderTarget. The rendered image is then used as texture for a second material.
I have an issue with alpha blending. The color that I obtain when alpha=0.5 is darker than expected.
The image below shows the issue:
The circle on top is what I expect. It is obtained with an HTML div with rounded corners and opacity=0.5.
The circle on the bottom is what I obtain with a shader that renders the circle into a texture.
I think that I'm missing something!
Part of the code is reported below. You can find the complete code in the following jsbin: https://jsbin.com/zukoyaziqe/1/edit?html,js,output
Thank you for your help!!
Shader:
const texFrag = `
  varying vec2 vUv;
  void main() {
    vec2 center = vec2(0.5, 0.2);
    float d = length(vUv - center);
    if (d < 0.1) {
      gl_FragColor = vec4(1.0, 0.0, 1.0, 0.5);
    } else {
      discard;
    }
  }
`;
Texture:
const makeTexture = (renderer, width, height) => {
  const target = new THREE.WebGLRenderTarget(width, height, {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType
  });
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(90, 1, 0.1, 100000);
  const geometry = new THREE.PlaneGeometry(2, 2);
  const material = new THREE.ShaderMaterial({
    transparent: true,
    vertexShader: simpleVert,
    fragmentShader: texFrag,
  });
  const mesh = new THREE.Mesh(geometry, material);
  camera.position.set(0, 0, 1);
  scene.add(camera);
  scene.add(mesh);
  renderer.render(scene, camera, target, true);
  return target.texture;
}
Main view:
const renderer = new THREE.WebGLRenderer({ canvas });
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(90, 1, 0.1, 100000);
const geometry = new THREE.PlaneGeometry(2, 2);
const material = new THREE.MeshBasicMaterial({
  transparent: true,
  map: makeTexture(renderer, canvas.width, canvas.height)
});
const mesh = new THREE.Mesh(geometry, material);
First of all, in the example you linked, your main function is called twice, so there are two CSS circles stacked on top of each other, resulting in a less transparent circle.
Then, you're drawing a circle with color (1,0,1,0.5) onto a blank render target, which, using the default blend mode (SRC_ALPHA, ONE_MINUS_SRC_ALPHA), results in the color (0.5,0,0.5,0.5), which is then used as a texture. If you want the original color in your texture, you should disable alpha blending or use a different blend mode. Simply setting transparent to false inside makeTexture does the trick.
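(As a concrete check of that arithmetic: the default blend function computes

$$C_{out} = \alpha_{src}\,C_{src} + (1 - \alpha_{src})\,C_{dst} = 0.5 \cdot (1, 0, 1) + 0.5 \cdot (0, 0, 0) = (0.5,\ 0,\ 0.5),$$

which is exactly the darker purple in the screenshot.)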

How to prevent interpolation of vertex colors in THREE.js shader?

I am trying to write a shader that draws contour plots on meshes.
Here is an example of a contour plot.
My first aim is to visualize a single triangular face with different colors.
You can find the code that I am using here.
<html lang="en">
<head>
<title>Face Contour Example</title>
</head>
<body>
<script src="http://threejs.org/build/three.min.js"></script>
<script src="http://threejs.org/examples/js/controls/OrbitControls.js"></script>
<script id="vertexShader" type="x-shader/x-vertex">
varying vec3 vColor;
void main(){
vColor = color;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script id="fragmentShader" type="x-shader/x-fragment">
varying vec3 vColor;
void main(){
gl_FragColor = vec4( vColor.rgb, 1.0 );
}
</script>
<script type="text/javascript">
var camera, scene, renderer, mesh, material, controls;
init();
animate();
function init() {
// Renderer.
renderer = new THREE.WebGLRenderer();
//renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);
// Add renderer to page
document.body.appendChild(renderer.domElement);
// Create camera.
camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.z = -400;
// Create scene.
scene = new THREE.Scene();
var colors = {
"color1" : {
type : "c",
value : new THREE.Color(0xff0000) //r
},
"color2" : {
type : "c",
value : new THREE.Color(0x00ff00) //b
},
"color3" : {
type : "c",
value : new THREE.Color(0x0000ff) //g
},
};
var fShader = document.getElementById('fragmentShader').text;
var vShader = document.getElementById('vertexShader').text;
// Create material
var material = new THREE.ShaderMaterial({
vertexShader: vShader,
fragmentShader: fShader,
vertexColors: THREE.VertexColors,
});
// var material = new THREE.MeshBasicMaterial( { vertexColors: THREE.VertexColors } );
// Create cube and add to scene.
var geometry = new THREE.Geometry();
geometry.vertices=[
new THREE.Vector3(100,0,0),
new THREE.Vector3(-100,0,0),
new THREE.Vector3(50,100,0)
]
var face=new THREE.Face3();
face.a=0;
face.b=1;
face.c=2;
face.vertexColors[ 0 ] = colors["color1"].value;
face.vertexColors[ 1 ] = colors["color2"].value;
face.vertexColors[ 2 ] = colors["color3"].value;
geometry.faces=[face]
mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
function addWireFrame(){
//Create wireframe helper for mesh with same geometry
var wireframeMesh=new THREE.WireframeGeometry(geometry);
var line = new THREE.LineSegments( wireframeMesh );
line.material.depthTest = false;
line.material.opacity = 0.75;
line.material.transparent = true;
mesh.add( line );
}
addWireFrame();
//Orbit controls
controls = new THREE.OrbitControls( camera );
// Create ambient light and add to scene.
var light = new THREE.AmbientLight(0x404040); // soft white light
scene.add(light);
// Create directional light and add to scene.
var directionalLight = new THREE.DirectionalLight(0xffffff);
directionalLight.position.set(1, 1, 1).normalize();
scene.add(directionalLight);
// Add listener for window resize.
window.addEventListener('resize', onWindowResize, false);
}
function animate() {
requestAnimationFrame(animate);
controls.update();
renderer.render(scene, camera);
}
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
</script>
</body>
</html>
In the code I assigned red, green and blue colors to each vertex of a face.
In the vertex shader, I pass those colors through to the fragment shader. In the fragment shader, I plan to use my own formula to decide which color is used for each fragment. (My formula will depend on the position on the face.)
However, I couldn't manage to prevent interpolation of the vertex colors. Is there a way to pick a vertex color from an array directly, without interpolation, in three.js?
Also, I appreciate alternative solutions that may be suitable for my problem.
You don't want to disable interpolation. You want, instead, to use the interpolated coordinates as an index. The interpolated color value tells you how close you are to each of the vertices. You can then quantize this interpolated value into ranges, or indexes into a color array, to produce the end color.
I modified your fiddle to show the color of the closest vertex using the following pixel shader:
void main() {
  vec3 c = vColor;
  gl_FragColor = vec4(c.r > c.g && c.r > c.b ? 1.0 : 0.0,
                      c.g > c.r && c.g > c.b ? 1.0 : 0.0,
                      c.b > c.r && c.b > c.g ? 1.0 : 0.0,
                      1.0);
}
The result looks like this:
You will need a more complex quantization method to show a contour map, but I hope this approach gives you a good start.
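For instance, a minimal banding sketch (assuming the scalar you want to contour has been packed into a single channel, here the interpolated red channel; the band count of 5 is an arbitrary choice):

varying vec3 vColor;
void main() {
  float v = vColor.r;                   // interpolated scalar in [0, 1]
  float band = floor(v * 5.0) / 5.0;    // snap to the lower edge of one of 5 bands
  gl_FragColor = vec4(vec3(band), 1.0); // grayscale bands; substitute a color ramp as needed
}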

filling shader attributes from webgl context in threejs visualization

I'm learning about shaders from a number of existing WebGL tutorials, and I was hoping there would be a way to attach a compiled shader program to a three.js ShaderMaterial, but I'm getting stuck. If possible, it would be very nice to set attributes and uniforms using the gl methods and then set the shader program on the material. Here's what I've tried.
<!doctype html>
<html>
  <head>
    <script src="http://threejs.org/build/three.min.js"></script>
    <meta charset="utf-8" />
    <title>Sample Three.js</title>
    <style>
      #container {
        background: #000;
        width: 400px;
        height: 300px;
      }
    </style>
  </head>
  <body>
  </body>

  <script type="x-shader/x-vertex" id="vertexshader">
    // switch on high precision floats
    #ifdef GL_ES
    precision highp float;
    #endif

    uniform mat4 projectionmyMatrix;
    attribute vec3 vertexPos;
    attribute float displacement;
    uniform float amplitude;

    void main() {
      vec3 newPos = vertexPos;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(newPos, 1.0);
    }
  </script>

  <script type="x-shader/x-fragment" id="fragmentshader">
    #ifdef GL_ES
    precision highp float;
    #endif

    void main(void) {
      gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
  </script>
  <!-- End Shaders -->

  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js"></script>
  <script type="text/javascript">
    // set the scene size
    var WIDTH = 800,
        HEIGHT = 600;

    // set some camera attributes
    var VIEW_ANGLE = 45,
        ASPECT = WIDTH / HEIGHT,
        NEAR = 1,
        FAR = 1000;

    // get the DOM element to attach to
    // - assume we've got jQuery to hand
    var $container = $('body');

    // create a WebGL renderer, camera and a scene
    var renderer = new THREE.WebGLRenderer();
    var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
    var scene = new THREE.Scene();

    // the camera starts at 0,0,0 so pull it back
    camera.position.z = 300;

    // start the renderer
    renderer.setSize(WIDTH, HEIGHT);

    // attach the render-supplied DOM element
    $container.append(renderer.domElement);

    // set up the sphere vars
    var radius = 50, segments = 16, rings = 16;

    // create the sphere's material
    var shaderMaterial = new THREE.ShaderMaterial({
      vertexShader: $('#vertexshader').text(),
      fragmentShader: $('#fragmentshader').text()
    });

    // create a new mesh with sphere geometry
    var sphere = new THREE.Mesh(
      new THREE.SphereGeometry(radius, segments, rings),
      shaderMaterial);

    // filling the attribute vertex array
    // add the sphere and camera to the scene
    scene.add(sphere);
    scene.add(camera);

    renderer.compile(scene, camera);
    var gl = renderer.getContext();
    var sq = createSquare(gl);
    var prg = shaderMaterial.program.program;
    var posAttr = gl.getAttribLocation(prg, 'vertexPos');

    // set the vertex buffer to be drawn
    gl.bindBuffer(gl.ARRAY_BUFFER, sq.buffer);

    // set the shader to use
    gl.useProgram(prg);

    // connect up the shader parameters: vertex position and projection/model matrices
    gl.vertexAttribPointer(posAttr, sq.vertSize, gl.FLOAT, false, 0, 0);
    renderer.compile(scene, camera);

    // create a rendering loop
    var frame = 0;
    function update() {
      frame += .01;
      renderer.render(scene, camera);
      requestAnimationFrame(update);
    }
    requestAnimationFrame(update);
  </script>
</html>
I would prefer not to have to translate from the tutorials into the uniforms/attributes syntax used by three.js, shown below:
var attributes = {
  displacement: {
    type: 'f', // a float
    value: [] // an empty array
  }
};
var uniforms = {
  amplitude: {
    type: 'f', // a float
    value: 1
  }
};
var vShader = $('#vertexshader');
var fShader = $('#fragmentshader');

// create the final material
var shaderMaterial = new THREE.MeshShaderMaterial({
  uniforms: uniforms,
  attributes: attributes,
  vertexShader: vShader.text(),
  fragmentShader: fShader.text()
});
...
No, this approach is neither recommended nor supported. Instead of using the raw WebGL context, you have two options:
You can use THREE.ShaderMaterial for a custom shader definition. three.js automatically provides some built-in attributes and uniforms (e.g. modelViewMatrix or projectionMatrix) which are frequently used by vertex and fragment shaders. The official doc page provides a lot of information.
THREE.RawShaderMaterial is a more lightweight option, since three.js does not prepend the mentioned built-in uniforms and attributes to your shader code.
The following two basic examples show the usage of both materials with the latest version of three.js (R91):
https://threejs.org/examples/#webgl_shader
https://threejs.org/examples/#webgl_buffergeometry_rawshader
I recommend working with these examples rather than with potentially outdated tutorials. For example, attributes is no longer a parameter of ShaderMaterial; attribute data are now part of the geometry.
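For orientation, a minimal RawShaderMaterial sketch in the spirit of the second example (untested here; note that the renderer still supplies the values of position, projectionMatrix and modelViewMatrix, you just have to declare them yourself):

var material = new THREE.RawShaderMaterial({
  vertexShader: [
    'precision highp float;',
    'uniform mat4 projectionMatrix;', // declared manually, filled in by the renderer
    'uniform mat4 modelViewMatrix;',
    'attribute vec3 position;',
    'void main() {',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'precision highp float;',
    'void main() { gl_FragColor = vec4(1.0); }'
  ].join('\n')
});

Any custom attribute (like displacement in the question) then lives on the BufferGeometry, added via geometry.addAttribute, and is declared in the shader source by hand.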

Lost fragments in my shader

I'm trying to build a tile system in three.js: green for ground, blue for water.
I'm using a shader on a PlaneBufferGeometry.
Here is what I have so far.
Relevant code:
JS: the chunk variable and the DoPlaneStuff() function (both at the beginning)
HTML: the vertex and fragment shaders
var chunk = {
  // number of width and height segments for PlaneBuffer
  segments: 32,
  // Heightmap: 0 = water, 1 = ground
  heightmap: [
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
  ],
  // size of the plane
  size: 40
};

function DoPlaneStuff() {
  var uniforms = {
    heightmap: {
      type: "iv1",
      // transform the 2d Array to a simple array
      value: chunk.heightmap.reduce((p, c) => p.concat(c), [])
    },
    hmsize: {
      type: "f",
      value: chunk.heightmap[0].length
    },
    coord: {
      type: "v2",
      value: new THREE.Vector2(-chunk.size / 2, -chunk.size / 2)
    },
    size: {
      type: "f",
      value: chunk.size
    }
  };
  console.info("UNIFORMS GIVEN :", uniforms);
  var shaderMaterial = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: document.getElementById("v_shader").textContent,
    fragmentShader: document.getElementById("f_shader").textContent
  });
  var plane = new THREE.Mesh(
    new THREE.PlaneBufferGeometry(chunk.size, chunk.size, chunk.segments, chunk.segments),
    shaderMaterial
  );
  plane.rotation.x = -Math.PI / 2;
  scene.add(plane);
}
// --------------------- END OF RELEVANT CODE

window.addEventListener("load", Init);

function Init() {
  Init3dSpace();
  DoPlaneStuff();
  Render();
}

var camera_config = {
  dist: 50,
  angle: (5 / 8) * (Math.PI / 2)
};

var scene, renderer, camera;

function Init3dSpace() {
  scene = new THREE.Scene();
  renderer = new THREE.WebGLRenderer({
    antialias: true,
    logarithmicDepthBuffer: true
  });
  camera = new THREE.PerspectiveCamera(
    50,
    window.innerWidth / window.innerHeight,
    0.1,
    1000
  );
  camera.position.y = camera_config.dist * Math.sin(camera_config.angle);
  camera.position.x = 0;
  camera.position.z = 0 + camera_config.dist * Math.cos(camera_config.angle);
  camera.rotation.x = -camera_config.angle;
  var light = new THREE.HemisphereLight(0xffffff, 10);
  light.position.set(0, 50, 0);
  scene.add(light);
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);
}

function Render() {
  renderer.render(scene, camera);
}
body {
  overflow: hidden;
  margin: 0;
}
<script src="//cdnjs.cloudflare.com/ajax/libs/three.js/r70/three.min.js"></script>
<!-- VERTEX SHADER -->
<script id="v_shader" type="x-shader/x-vertex">
// size of the plane
uniform float size;
// coordinates of the geometry
uniform vec2 coord;
// heightmap size (=width and height of the heightmap)
uniform float hmsize;
uniform int heightmap[9];
varying float colorValue;
void main() {
int xIndex = int(floor(
(position.x - coord.x) / (size / hmsize)
));
int yIndex = int(floor(
(-1.0 * position.y - coord.y) / (size / hmsize)
));
// Get the index of the corresponding tile in the array
int index = xIndex + int(hmsize) * yIndex;
// get the value of the tile
colorValue = float(heightmap[index]);
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<!-- FRAGMENT SHADER -->
<script id="f_shader" type="x-shader/x-fragment">
varying float colorValue;
void main() {
// default color is something is not expected: RED
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
// IF WATER
if (colorValue == 0.0) {
// BLUE
gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
}
// IF GROUND
if (colorValue == 1.0) {
// GREEN
gl_FragColor = vec4( 0.1, 0.6, 0.0, 1.0 );
}
}
</script>
As you can see it's almost working, but I have these red lines splitting the green and blue areas and I can't figure out why.
I call these red fragments the "lost ones" because they don't map to any tile, and I can't see why.
I could only notice that with a greater value of chunk.segments (the number of height and width segments of the geometry) the red lines get thinner.
I would like to know how to have a gradient fill between green and blue zones instead of red.
The red lines are formed by triangles that have some vertices lying in a ground tile and other vertices in a water tile. The GPU then interpolates colorValue across the triangle, producing a smooth gradient with values from 0 to 1 instead of the sharp step you probably expect.
There are several solutions for this. You can change the condition in your shader to choose the color based on the midpoint: if colorValue < 0.5, output blue, otherwise green. That won't work well if you decide you want more tile types later on, though. A better solution would be to generate your geometry in such a way that all vertices of each triangle lie in a single tile. That will involve doubling up vertices that lie on the tile boundaries. You can also add the flat interpolation qualifier to colorValue, but then it's harder to control which vertex's attribute the triangle will end up using.
... I just noticed that you do want a gradient instead of a sharp step. That's even easier. You need to move the color selection code from the fragment shader to the vertex shader and just return the resulting interpolated color in the fragment shader.
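A minimal sketch of that last suggestion, reusing the blue/green constants from the fragment shader above (only the new lines are shown; the index/colorValue computation stays as it is):

// vertex shader: pick the tile color per vertex
varying vec3 vColor;
void main() {
  // ... compute colorValue from the heightmap as before, then:
  vColor = (colorValue == 0.0)
    ? vec3(0.0, 0.0, 1.0)   // water
    : vec3(0.1, 0.6, 0.0);  // ground
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// fragment shader: the GPU interpolates vColor across each triangle,
// which yields exactly the blue-to-green gradient asked for
varying vec3 vColor;
void main() {
  gl_FragColor = vec4(vColor, 1.0);
}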

Weld edge vertices of BoxBufferGeometry

I am trying to create terrain in the shape of a cube that allows vertex displacement along the y-axis for the vertices of the top plane. All vertices adjacent to those of the top plane need to be connected.
User input from either desktop or mobile should move them up or down in a performant manner.
From what I have read it is better to offload expensive operations to the GPU. I thought achieving the vertex displacement in a ShaderMaterial with a displacement attribute seemed like a perfect fit until I read the following:
As of THREE r72, directly assigning attributes in a ShaderMaterial is no longer supported. A BufferGeometry instance (instead of a Geometry instance) must be used instead.
So it seems that using attributes with my Geometry is out of the question?
My attempt at displacing the vertices along the top plane using BufferGeometry in the ShaderMaterial, however, results in the following:
The top plane's vertices of the BufferGeometry are not connected to those of the other planes, contrary to those of the Geometry, which are connected by using its mergeVertices method. To my knowledge that method is not available for BufferGeometry objects?
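(A side note: much later three.js releases, well past the r76 used in the snippet below, ship a welding helper in the examples, THREE.BufferGeometryUtils.mergeVertices. A sketch, assuming BufferGeometryUtils.js from the examples has been loaded; note it only merges vertices whose attributes all match, so the box's per-face normals and uvs must be dropped first:)

// hypothetical usage on a recent three.js build, not on r76
geometry.deleteAttribute('normal'); // per-face normals keep corner vertices distinct
geometry.deleteAttribute('uv');
const welded = THREE.BufferGeometryUtils.mergeVertices(geometry);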
Basically what started my fear, uncertainty and doubt concerning Geometry was a post I read by mrdoob.
Summary
I already have this working with Geometry, but would like to make use of the GPU via ShaderMaterial attributes, seemingly only supported by BufferGeometry, if that offers performance benefits on mobile and if Geometry might be deprecated in the future.
Here is a small snippet illustrating the issue:
let winX = window.innerWidth;
let winY = window.innerHeight;

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, winX / winY, 0.1, 100);
camera.position.set(2, 1, 2);
camera.lookAt(scene.position);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(winX, winY);
document.body.appendChild(renderer.domElement);

const terrainGeo = new THREE.BoxBufferGeometry(1, 1, 1);
const terrainMat = new THREE.ShaderMaterial({
  vertexShader: `
    attribute float displacement;
    varying vec3 dPosition;
    void main() {
      dPosition = position;
      dPosition.y += displacement;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(dPosition, 1.0);
    }
  `,
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
    }
  `
});
const terrainObj = new THREE.Mesh(terrainGeo, terrainMat);

let displacement = new Float32Array(terrainObj.geometry.attributes.position.count);
displacement.forEach((elem, index) => {
  // Select vertices 8-11, the top face of the cube
  if (index >= 8 && index <= 11) {
    displacement[index] = Math.random() * 0.1 + 0.25;
  }
});
terrainObj.geometry.addAttribute('displacement',
  new THREE.BufferAttribute(displacement, 1)
);

scene.add(camera);
scene.add(terrainObj);

const render = () => {
  requestAnimationFrame(render);
  renderer.render(scene, camera);
};
render();

const gui = new dat.GUI();
const updateBufferAttribute = () => {
  terrainObj.geometry.attributes.displacement.needsUpdate = true;
};
gui.add(displacement, 8).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 9).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 10).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 11).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
<script src="https://cdnjs.cloudflare.com/ajax/libs/dat-gui/0.5.1/dat.gui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r76/three.min.js"></script>
<style type="text/css">body { margin: 0 } canvas { display: block }</style>
