Best way to update THREE.Points Geometry - three.js

I have a Blender model that has some shape keys on it. I export this model and use it in three.js. Let's say it is a cube. I then create a THREE.Points object that draws small spheres on the vertices of this model (i.e. the cube).
var cube = loadGLTFFromBlender(gltfFile); // This loads the model
var points = new THREE.Points(cube.geometry, pointMaterial); //got my points
Now using the morphTargets, I animate the cube. So far so good.
// Some code that uses the shape keys
cube.morphTargetInfluences[0] = 0.2; // <-- I animate this value
I was expecting the points to morph/animate as well, since they use the same geometry, but that does not seem to work out of the box.
Some Googling suggested that morphing happens on the GPU and that the geometry data in the object is never actually modified. In one of the Stack Overflow questions (Three.js: Get updated vertices with morph targets), @WestLangley suggested doing the same geometry updates on the CPU and then using the result.
My questions:
Are there any other options for approaching this problem?
The CPU morph calculation seems slow, and I am trying to see if there is another way to keep my THREE.Points in sync with the morphed cube. The cube is animated with morph targets, and I expect the THREE.Points to do the same.
I guess #1 is not really an option if the shape key animations in Blender are more involved?
Ideally, something like the following will help:
cube.morphTargetInfluences[0] = 0.2; // <-- I animate this value
var morphedGeometry = getMorphedGeometry(cube);
syncPointsGeometry(morphedGeometry);

Answering my own question, in case this is helpful to others in the future.
I was able to find a couple of solutions:
CPU based (slow) - use a function to copy the morph target into the positions:
function updatePointsGeometry(points, weight) {
  if (!points.geometry.attributes.morphTarget0) {
    return;
  }
  var position = points.geometry.attributes.position;
  var morphTarget0 = points.geometry.attributes.morphTarget0;
  // walk the vertices; each vertex occupies three consecutive floats (x, y, z)
  for (var i = 0; i < position.count; i++) {
    position.array[3 * i + 0] = weight * morphTarget0.array[3 * i + 0];
    position.array[3 * i + 1] = weight * morphTarget0.array[3 * i + 1];
    position.array[3 * i + 2] = weight * morphTarget0.array[3 * i + 2];
  }
  position.needsUpdate = true;
}
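For completeness, here is one way this could be driven from the render loop. This is only a sketch: renderer, scene and camera are assumed to exist, and the sine-based easing is purely illustrative.
function animate(time) {
  // Animate the mesh's first morph influence between 0 and 1...
  cube.morphTargetInfluences[0] = (Math.sin(time * 0.001) + 1) / 2;
  // ...and copy the morphed positions into the point cloud on the CPU.
  updatePointsGeometry(points, cube.morphTargetInfluences[0]);
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);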
GPU based (recommended) - use a custom shader that applies the morph targets and morph influences to the points:
var shaderPoint = THREE.ShaderLib.points;
var uniforms = THREE.UniformsUtils.clone(shaderPoint.uniforms);
uniforms.map.value = imageTexture;
uniforms.size.value = 10;

// Custom shader
customPointMaterial = new THREE.ShaderMaterial({
  uniforms: uniforms,
  morphTargets: true,
  defines: {
    USE_MAP: ""
  },
  transparent: false,
  opacity: 1,
  alphaTest: 0.5,
  fog: false,
  vertexShader: document.getElementById( 'vertexShader' ).textContent,
  fragmentShader: shaderPoint.fragmentShader
});

// copy over the morph data from the original geometry to the points
myPoints = new THREE.Points(mesh.geometry, customPointMaterial);
myPoints.morphTargetInfluences = mesh.morphTargetInfluences;
myPoints.morphTargetDictionary = mesh.morphTargetDictionary;
And here is the vertex shader:
uniform float size;
uniform float scale;
uniform float morphTargetInfluences[ 4 ];

#include <common>
#include <color_pars_vertex>
#include <fog_pars_vertex>
#include <shadowmap_pars_vertex>
#include <logdepthbuf_pars_vertex>
#include <clipping_planes_pars_vertex>

void main() {
  #include <color_vertex>
  #include <begin_vertex>
  #include <project_vertex>

  #ifdef USE_SIZEATTENUATION
    gl_PointSize = size * ( scale / - mvPosition.z );
  #else
    gl_PointSize = size;
  #endif

  vec3 morphed = vec3( 0.0, 0.0, 0.0 );
  morphed += ( morphTarget0 - position ) * morphTargetInfluences[ 0 ];
  morphed += position;
  gl_Position = projectionMatrix * modelViewMatrix * vec4( morphed, 1.0 );

  #include <logdepthbuf_vertex>
  #include <clipping_planes_vertex>
  #include <worldpos_vertex>
  #include <shadowmap_vertex>
  #include <fog_vertex>
}
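With the GPU approach there is nothing to copy per frame, because myPoints.morphTargetInfluences references the same array as mesh.morphTargetInfluences. A minimal render-loop sketch (renderer, scene and camera are assumed to exist, and the easing is illustrative):
function animate(time) {
  // The points follow automatically: both objects share the same
  // morphTargetInfluences array by reference, and the shader applies it.
  mesh.morphTargetInfluences[0] = (Math.sin(time * 0.001) + 1) / 2;
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);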

Related

Autodesk Forge Viewer - smoke particle effect in Forge Viewer

I've been trying to make a smoke particle effect in the Forge Viewer for several days. I want to achieve something like this in the Forge Viewer. I found some three.js particle engine samples, but none of them can be used with version r71 (which is what the Forge Viewer uses), so I decided to write my own particle engine. But there's a problem I can't figure out.
At first, I tried it in plain three.js (not in the Forge Viewer, with version r71 of course) and I can do something like this. That seemed good to me, so I thought I could start writing my particle engine. But when I test it in the Forge Viewer, things don't go well.
Back in the Forge Viewer, I tested a point cloud with a custom shader and it worked well. I can customize attributes such as color, size and position for every single point. But when I try to add an image texture using texture2D in my fragmentShader, the browser shows some warnings and nothing shows up in the viewer.
Here are the warnings shown by the browser:
WebGL: INVALID_OPERATION: getUniformLocation: program not linked
WebGL: INVALID_OPERATION: getAttribLocation: program not linked
WebGL: INVALID_OPERATION: useProgram: program not valid
vertexShader:
attribute float customSize;

void main() {
  vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
  gl_Position = projectionMatrix * mvPosition;
  gl_PointSize = customSize;
}
fragmentShader:
uniform sampler2D texture;

void main() {
  vec4 color = texture2D(texture, gl_PointCoord); // when I comment out this line, everything works well
  gl_FragColor = color;
}
function for create points:
createPoints() {
  const width = 100;
  const height = 100;
  const pointCount = width * height;
  const positions = new Float32Array(pointCount * 3);
  //const colors = new Float32Array(pointCount * 4);
  const sizes = new Float32Array(pointCount);
  const geometry = new THREE.BufferGeometry();
  const material = new THREE.ShaderMaterial({
    uniforms: {
      texture: { type: "t", value: this.particleTexture }
    },
    vertexShader: this.vertexShader,
    fragmentShader: this.fragmentShader,
    transparent: true,
    depthTest: true,
    blending: THREE.NormalBlending
  });
  material.supportsMrtNormals = true;
  let i = 0;
  for (var x = 0; x < width; x++) {
    for (var y = 0; y < height; y++) {
      const u = x / width, v = y / height;
      positions[i * 3] = u * 20;
      positions[i * 3 + 1] = v * 20;
      positions[i * 3 + 2] = Math.sin(u * 20) + Math.cos(v * 20);
      sizes[i] = 1 + THREE.Math.randFloat(1, 5);
      // the colors array is commented out above, so these assignments are disabled as well
      //colors[i * 4] = Math.random();
      //colors[i * 4 + 1] = Math.random();
      //colors[i * 4 + 2] = Math.random();
      //colors[i * 4 + 3] = 1;
      i++;
    }
  }
  //const colorsAttribute = new THREE.BufferAttribute(colors, 4);
  //colorsAttribute.normalized = true;
  geometry.addAttribute("position", new THREE.BufferAttribute(positions, 3));
  geometry.addAttribute("customSize", new THREE.BufferAttribute(sizes, 1));
  //geometry.addAttribute("customColor", colorsAttribute);
  geometry.computeBoundingBox();
  geometry.isPoints = true;
  points = new THREE.PointCloud(geometry, material);
  viewer.impl.createOverlayScene('pointclouds');
  viewer.impl.addOverlay('pointclouds', points);
}
In the createPoints() function, this.particleTexture comes from:
THREE.ImageUtils.loadTexture("../img/smokeparticle.png")
The vertexShader, fragmentShader and createPoints() function are all the same between the plain three.js test app in the browser (not in the Forge Viewer) and the Forge Viewer app, but it only works well when it is not running in the Forge Viewer.
I have searched a lot of tutorials and blogs, but just can't find a solution that fits. Can anyone help? Or maybe there's a better way to make a smoke effect in the Forge Viewer? Thanks for any help!
(If I missed some information, just tell me and I will update the question!)
I tried your code and I managed to make it work by changing the uniform name from texture to tex.
I think texture is effectively a reserved word in WebGL2, since GLSL ES 3.00 uses texture() as its built-in sampling function.
If you switch to WebGL1 (as described in this article: Custom shader materials in Forge), it works.
Fragment shader:
uniform sampler2D tex;

void main() {
  vec4 color = texture2D(tex, gl_PointCoord);
  gl_FragColor = color;
}
Replace the uniform name in the material constructor:
const material = new THREE.ShaderMaterial({
  uniforms: {
    tex: { type: "t", value: this.particleTexture }
  },
  vertexShader: this.vertexShader,
  fragmentShader: this.fragmentShader,
  transparent: true,
  depthTest: true,
  blending: THREE.NormalBlending
});
Can you try it?
My colleague blogged about adding 3D markups to the viewer using THREE.Points and custom shader with textures: https://forge.autodesk.com/blog/3d-markup-icons-and-info-card. This sounds quite close to what you're trying to do.

Performantly render tens of thousands of spheres of variable size/color/position in Three.js?

This question is picking up from my last question where I found that using Points leads to problems: https://stackoverflow.com/a/60306638/4749956
To solve this you'll need to draw your points using quads instead of points. There are many ways to do that. Draw each quad as a separate mesh or sprite, or merge all the quads into another mesh, or use InstancedMesh where you'll need a matrix per point, or write custom shaders to do points (see the last example on this article)
I've been trying to figure this answer out. My questions are
What is 'instancing'? What is the difference between merging geometries and instancing? And, if I were to do either one of these, what geometry would I use and how would I vary color? I've been looking at this example:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html
And I see that for each sphere you would have a geometry which would apply the position and the size (scale?). Would the underlying geometry be a SphereBufferGeometry of unit radius, then? But, how do you apply color?
Also, I read about the custom shader method, and it makes some vague sense. But, it seems more complex. Would the performance be any better than the above?
Based on your previous question...
First off, instancing is a way to tell three.js to draw the same geometry multiple times but change one or more things for each "instance". IIRC the only thing three.js supports out of the box is setting a different matrix (position, orientation, scale) for each instance. Past that, like having different colors for example, you have to write custom shaders.
Instancing allows you to ask the system to draw many things with one "ask" instead of an "ask" per thing. That means it ends up being much faster. You can think of it like anything: if you want 3 hamburgers you could ask someone to make you 1; when they finish you could ask them to make another; when they finish you could ask them to make a 3rd. That would be much slower than just asking them to make 3 hamburgers at the start. It's not a perfect analogy, but it does point out how asking for multiple things one at a time is less efficient than asking for multiple things all at once.
Merging meshes is yet another solution. Following the imperfect analogy above, merging meshes is like making one big 1-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting toppings and buns on one large burger is marginally faster than doing the same to 3 small burgers.
As for which is the best solution for you, that depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes, on the other hand, rotate in world space by default, so if you made instances of quads, or a merged set of quads, and tried to rotate them, they would turn and not face the camera the way Points do. If you used sphere geometry then you'd have the issue that instead of computing only 6 vertices per quad with a circle drawn on it, you'd be computing hundreds or thousands of vertices per sphere, which would be slower than 6 vertices per quad.
So again it requires a custom shader to keep the points facing the camera.
To do it with instancing, the short version is that you decide which vertex data are repeated for each instance. For example, for a textured quad we need 6 vertex positions and 6 uvs. For these you make normal BufferAttributes.
Then you decide which vertex data are unique to each instance. In your case the size, the color, and the center of the point. For each of these we make an InstancedBufferAttribute
We add all of those attributes to an InstancedBufferGeometry and as the last argument we tell it how many instances.
At draw time you can think of it like this
for each instance
set size to the next value in the size attribute
set color to the next value in the color attribute
set center to the next value in the center attribute
call the vertex shader 6 times, with position and uv set to the nth value in their attributes.
In this way you get the same geometry (the positions and uvs) used multiple times but each time a few values (size, color, center) change.
body {
margin: 0;
}
#c {
width: 100vw;
height: 100vh;
display: block;
}
#info {
position: absolute;
right: 0;
bottom: 0;
color: red;
background: black;
}
<canvas id="c"></canvas>
<div id="info"></div>
<script type="module">
// Three.js - Picking - RayCaster w/Transparency
// from https://threejsfundamentals.org/threejs/threejs-picking-gpu.html
import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";
function main() {
const infoElem = document.querySelector("#info");
const canvas = document.querySelector("#c");
const renderer = new THREE.WebGLRenderer({ canvas });
const fov = 60;
const aspect = 2; // the canvas default
const near = 0.1;
const far = 200;
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.z = 30;
const scene = new THREE.Scene();
scene.background = new THREE.Color(0);
const pickingScene = new THREE.Scene();
pickingScene.background = new THREE.Color(0);
// put the camera on a pole (parent it to an object)
// so we can spin the pole to move the camera around the scene
const cameraPole = new THREE.Object3D();
scene.add(cameraPole);
cameraPole.add(camera);
function randomNormalizedColor() {
return Math.random();
}
function getRandomInt(n) {
return Math.floor(Math.random() * n);
}
function getCanvasRelativePosition(e) {
const rect = canvas.getBoundingClientRect();
return {
x: e.clientX - rect.left,
y: e.clientY - rect.top
};
}
const textureLoader = new THREE.TextureLoader();
const particleTexture =
"https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";
const vertexShader = `
attribute float size;
attribute vec3 customColor;
attribute vec3 center;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vColor = customColor;
vUv = uv;
vec3 viewOffset = position * size ;
vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
gl_Position = projectionMatrix * mvPosition;
}
`;
const fragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.5) discard;
gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
}
`;
const pickFragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.25) discard;
gl_FragColor = vec4(vColor.rgb, 1.0);
}
`;
const materialSettings = {
uniforms: {
texture: {
type: "t",
value: textureLoader.load(particleTexture)
}
},
vertexShader: vertexShader,
fragmentShader: fragmentShader,
blending: THREE.NormalBlending,
depthTest: true,
transparent: false
};
const createParticleMaterial = () => {
const material = new THREE.ShaderMaterial(materialSettings);
return material;
};
const createPickingMaterial = () => {
const material = new THREE.ShaderMaterial({
...materialSettings,
fragmentShader: pickFragmentShader,
blending: THREE.NormalBlending
});
return material;
};
const geometry = new THREE.InstancedBufferGeometry();
const pickingGeometry = new THREE.InstancedBufferGeometry();
const colors = [];
const sizes = [];
const pickingColors = [];
const pickingColor = new THREE.Color();
const centers = [];
const numSpheres = 30;
const positions = [
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
-0.5, 0.5,
0.5, -0.5,
0.5, 0.5,
];
const uvs = [
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1,
];
for (let i = 0; i < numSpheres; i++) {
colors[3 * i] = randomNormalizedColor();
colors[3 * i + 1] = randomNormalizedColor();
colors[3 * i + 2] = randomNormalizedColor();
const rgbPickingColor = pickingColor.setHex(i + 1);
pickingColors[3 * i] = rgbPickingColor.r;
pickingColors[3 * i + 1] = rgbPickingColor.g;
pickingColors[3 * i + 2] = rgbPickingColor.b;
sizes[i] = getRandomInt(5);
centers[3 * i] = getRandomInt(20);
centers[3 * i + 1] = getRandomInt(20);
centers[3 * i + 2] = getRandomInt(20);
}
geometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
geometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
geometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
);
geometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
geometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));
const material = createParticleMaterial();
const points = new THREE.InstancedMesh(geometry, material, numSpheres);
// setup geometry and material for GPU picking
pickingGeometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
pickingGeometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
pickingGeometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
);
pickingGeometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
pickingGeometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
);
const pickingMaterial = createPickingMaterial();
const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);
scene.add(points);
pickingScene.add(pickingPoints);
function resizeRendererToDisplaySize(renderer) {
const canvas = renderer.domElement;
const width = canvas.clientWidth;
const height = canvas.clientHeight;
const needResize = canvas.width !== width || canvas.height !== height;
if (needResize) {
renderer.setSize(width, height, false);
}
return needResize;
}
class GPUPickHelper {
constructor() {
// create a 1x1 pixel render target
this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
this.pixelBuffer = new Uint8Array(4);
}
pick(cssPosition, pickingScene, camera) {
const { pickingTexture, pixelBuffer } = this;
// set the view offset to represent just a single pixel under the mouse
const pixelRatio = renderer.getPixelRatio();
camera.setViewOffset(
renderer.getContext().drawingBufferWidth, // full width
renderer.getContext().drawingBufferHeight, // full top
(cssPosition.x * pixelRatio) | 0, // rect x
(cssPosition.y * pixelRatio) | 0, // rect y
1, // rect width
1 // rect height
);
// render the scene
renderer.setRenderTarget(pickingTexture);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// clear the view offset so rendering returns to normal
camera.clearViewOffset();
//read the pixel
renderer.readRenderTargetPixels(
pickingTexture,
0, // x
0, // y
1, // width
1, // height
pixelBuffer
);
const id =
(pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
infoElem.textContent = `You clicked sphere number ${id}`;
return id;
}
}
const pickHelper = new GPUPickHelper();
function render(time) {
time *= 0.001; // convert to seconds;
if (resizeRendererToDisplaySize(renderer)) {
const canvas = renderer.domElement;
camera.aspect = canvas.clientWidth / canvas.clientHeight;
camera.updateProjectionMatrix();
}
cameraPole.rotation.y = time * 0.1;
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
function onClick(e) {
const pickPosition = getCanvasRelativePosition(e);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
function onTouch(e) {
const touch = e.touches[0];
const pickPosition = getCanvasRelativePosition(touch);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
window.addEventListener("mousedown", onClick);
window.addEventListener("touchstart", onTouch);
}
main();
</script>
This is quite a broad topic. In short, both merging and instancing are about reducing the number of draw calls when rendering something.
If you bind your sphere geometry once but keep re-rendering it, it costs you more to tell your computer to draw it many times than it costs the computer to actually draw it. You end up with the GPU, a powerful parallel processing device, sitting idle.
Obviously, if you create a unique sphere at each point in space and merge them all, you only pay the price of telling the GPU to render once, and it stays busy rendering thousands of your spheres.
However, merging will increase your memory footprint and has some overhead when you're actually creating the unique data. Instancing is a built-in, clever way of achieving the same effect at a lower memory cost.
I have an article written on this topic.
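To make the tradeoff concrete, here is a minimal sketch of both approaches. It is not taken from the article: the sphere geometry, counts and random placement are illustrative, and an r109+ build is assumed for InstancedMesh.
import * as THREE from 'three';
import { BufferGeometryUtils } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

const count = 10000;
const sphereGeo = new THREE.SphereBufferGeometry(0.5, 8, 6);
const material = new THREE.MeshBasicMaterial({ color: 0xff0000 });

// (a) Instancing: one geometry uploaded once, one draw call,
// only a 4x4 matrix stored per instance.
const instanced = new THREE.InstancedMesh(sphereGeo, material, count);
const m = new THREE.Matrix4();
for (let i = 0; i < count; i++) {
  m.makeTranslation(Math.random() * 100, Math.random() * 100, Math.random() * 100);
  instanced.setMatrixAt(i, m);
}

// (b) Merging: bake a transformed copy of every sphere into one big geometry.
// Also one draw call, but every vertex is duplicated in memory.
const copies = [];
for (let i = 0; i < count; i++) {
  const g = sphereGeo.clone();
  g.translate(Math.random() * 100, Math.random() * 100, Math.random() * 100);
  copies.push(g);
}
const merged = new THREE.Mesh(
  BufferGeometryUtils.mergeBufferGeometries(copies),
  material
);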

Glow effect shader works on desktop but not on mobile

I'm currently working on a personal project to generate a planet using procedural methods. The problem is the glow effect I am trying to achieve using GLSL: the intended effect works on desktop but not on mobile.
The following links illustrate the problem:
Intended Effect
iPhone6S result
The planet is composed of four IcosahedronBufferGeometry meshes: earth, water, clouds and the glow effect. If I disable the glow effect, the scene works as intended on mobile, so the conclusion is that the problem lies within the glow effect.
Here is the code for the glow effect (vertex and fragment shaders):
Vertex shader:
varying float intensity;

void main() {
  /* Calculates dot product of the view vector (cameraPosition) and the normal */
  /* High value exponent = less intense since dot product is always less than 1 */
  vec3 vNormal = vec3(modelMatrix * vec4(normal, 0.0));
  intensity = pow(0.2 - dot(normalize(cameraPosition), vNormal), 2.8);
  gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
Fragment shader:
varying float intensity;

void main() {
  vec3 glow = vec3(255.0/255.0, 255.0/255.0, 255.0/255.0) * intensity;
  gl_FragColor = vec4( glow, 1.0 );
}
THREE.js Code
var glowMaterial, glowObj, glowUniforms, sUniforms;
sUniforms = sharedUniforms();

/* Uniforms */
glowUniforms = {
  lightPos: {
    type: sUniforms["lightPos"].type,
    value: sUniforms["lightPos"].value,
  }
};

/* Material */
glowMaterial = new THREE.ShaderMaterial({
  uniforms: THREE.UniformsUtils.merge([
    THREE.UniformsLib["ambient"],
    THREE.UniformsLib["lights"],
    glowUniforms
  ]),
  vertexShader: glow_vert,
  fragmentShader: glow_frag,
  lights: true,
  side: THREE.FrontSide,
  blending: THREE.AdditiveBlending,
  transparent: true
});

/* Add object to scene */
glowObj = new THREE.Mesh(new THREE.IcosahedronBufferGeometry(35, 4), glowMaterial);
scene.add(glowObj);
There are no errors or warnings in the console on either desktop or mobile (using a remote web inspector). As shown above, on mobile the glow is plain white, while on desktop the intensity/color/opacity of the material, based on the value of the dot product in the vertex shader, works as expected.

Three.js custom Shader with Texture

I want to write a custom shader which manipulates my image with three.js.
For that I want to create a plane with the image as a texture. Afterwards I want to move vertices around to distort the image.
(If that is absolutely the wrong way to do this, please tell me.)
First I have my shaders:
<script type="x-shader/x-vertex" id="vertexshader">
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
void main() {
// Pass the texcoord to the fragment shader.
v_texCoord = a_texCoord;
gl_Position = projectionMatrix *
modelViewMatrix *
vec4(position,1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main() {
vec4 color = texture2D(u_texture, v_texCoord);
gl_FragColor = color;
}
</script>
I don't really understand what texture2D is doing, but I found it in other code fragments.
What I want with this sample: just color the fragment (gl_FragColor) with the color from the «underlying» image (the texture).
In my code I have set up a normal three.js scene with a plane:
// set some camera attributes
var VIEW_ANGLE = 45,
ASPECT = window.innerWidth/window.innerHeight,
NEAR = 0.1,
FAR = 1000;
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.position.set(0, 0, 15);
var vertShader = document.getElementById('vertexshader').innerHTML;
var fragShader = document.getElementById('fragmentshader').innerHTML;
var texloader = new THREE.TextureLoader();
var texture = texloader.load("img/color.jpeg");
var uniforms = {
u_texture: {type: 't', value: 0, texture: texture},
};
var attributes = {
a_texCoord: {type: 'v2', value: new THREE.Vector2()}
};
// create the final material
var shaderMaterial = new THREE.ShaderMaterial({
uniforms: uniforms,
vertexShader: vertShader,
fragmentShader: fragShader
});
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild(renderer.domElement);
var plane = {
width: 5,
height: 5,
widthSegments: 10,
heightSegments: 15
}
var geometry = new THREE.PlaneBufferGeometry(plane.width, plane.height, plane.widthSegments, plane.heightSegments)
var material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
var plane = new THREE.Mesh( geometry, shaderMaterial );
scene.add(plane);
plane.rotation.y += 0.2;
var render = function () {
requestAnimationFrame(render);
// plane.rotation.x += 0.1;
renderer.render(scene, camera);
};
render();
Unfortunately, after running that code I just see a black window, although I know that if I use the basic material (material) instead of shaderMaterial when creating the mesh, I can see the plane clearly.
So it must be the shaderMaterial or the shaders.
Questions:
Do I have to define the uniform u_texture and the attribute a_texCoord in my ShaderMaterial's uniforms and attributes? And do they have to have the exact same name?
How many vertices are there anyway? Will I get a vertex for every pixel in the image? Or is it just 4, one for each corner of the plane?
What value does a_texCoord have? Nothing happens if I write:
var attributes = {
a_texCoord: {type: 'v2', value: new THREE.Vector2(1,1)}
};
Or do I have to use some mapping (built in map stuff from three)? But how would I then change vertex positions?
Could someone shed some light on that matter?
I got it to work by changing this:
var uniforms = {
u_texture: {type: 't', value: 0, texture: texture},
};
To this:
var uniforms = {
u_texture: {type: 't', value: texture},
};
Anyway all other questions are still open and answers highly appreciated.
(btw: why the downvote from someone?)
Do I have to define the uniform u_texture and the attribute a_texCoord in my ShaderMaterial's uniforms and attributes? And do they have to have the exact same name?
Yes and yes. The uniforms are defined as part of the shader material, while the attributes have been moved from the shader material to the BufferGeometry class in version r72 (I'm assuming you are using an up-to-date version, so here is how you do this today):
var geometry = new THREE.PlaneBufferGeometry(...);

// first, create an array to hold the a_texCoord-values per vertex
var numVertices = (plane.widthSegments + 1) * (plane.heightSegments + 1);
var texCoordBuffer = new Float32Array(2 * numVertices);

// now register it as a new attribute (the 2 here indicates that there are
// two values per element (vec2))
geometry.addAttribute('a_texCoord', new THREE.BufferAttribute(texCoordBuffer, 2));
As you can see, the attribute will only work if it has the exact same name as specified in your shader-code.
I don't know exactly what you are planning to use this for, but it sounds suspiciously like you want to have the uv-coordinates. If that is the case, you can save yourself a lot of work if you have a look at the THREE.PlaneBufferGeometry-class. It already provides an attribute named uv that is probably exactly what you are looking for. So you just need to change the attribute-name in your shader-code to
attribute vec2 uv;
How many vertices are there anyway? Will I get a vertex for every pixel in the image? Or is it just 4, one for each corner of the plane?
The vertices are created according to the heightSegments and widthSegments parameters. So if you set both to 5, there will be (5 + 1) * (5 + 1) = 36 vertices (+1 because a line with only 1 segment has two vertices etc.) and 5 * 5 * 2 = 50 triangles (with 150 indices) in total.
Another thing to note is that the PlaneBufferGeometry is an indexed geometry. This means that every vertex (and every other attribute-value) is stored only once, although it is used by multiple triangles. There is a special index-attribute that contains the information which vertices are used to create which triangles.
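A quick way to check these numbers, using the plane parameters from the question (the expected values are noted in the comments):
const geometry = new THREE.PlaneBufferGeometry(5, 5, 10, 15);

console.log(geometry.attributes.position.count); // (10 + 1) * (15 + 1) = 176 vertices
console.log(geometry.attributes.uv.count);       // one uv pair per vertex: 176
console.log(geometry.index.count / 3);           // 10 * 15 * 2 = 300 triangles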
What value does a_texCoord have? Nothing happens if I write: ...
I hope the above helps to answer that.
Or do I have to use some mapping (built in map stuff from three)?
I would suggest you use the uv attribute as described above. But you absolutely don't have to.
But how would I then change vertex positions?
There are at least two ways to do this: in the vertex shader or via JavaScript. The latter can be seen here: http://codepen.io/usefulthink/pen/vKzRKr?editors=1010
(the relevant part for updating the geometry starts in line 84).
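Independent of that codepen, the JavaScript variant boils down to mutating the position attribute and flagging it for re-upload. A minimal sketch (geometry, renderer, scene and camera are assumed to exist, and the sine displacement is purely illustrative):
const position = geometry.attributes.position;

function animate(time) {
  for (let i = 0; i < position.count; i++) {
    // displace each vertex along z with a moving sine wave
    position.setZ(i, 0.25 * Math.sin(position.getX(i) * 2.0 + time * 0.001));
  }
  position.needsUpdate = true; // tell three.js to re-upload the buffer

  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);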

Weld edge vertices of BoxBufferGeometry

I am trying to create terrain in the shape of a cube that allows the vertices of its top plane to be displaced along the y-axis. All vertices adjacent to those of the top plane need to stay connected.
In a performant manner, user input from either desktop or mobile would move them up or down.
From what I have read it is better to offload expensive operations to the GPU. I thought achieving the vertex displacement in a ShaderMaterial with a displacement attribute seemed like a perfect fit until I read the following:
As of THREE r72, directly assigning attributes in a ShaderMaterial is no longer supported. A BufferGeometry instance (instead of a Geometry instance) must be used instead.
So it seems that using attribute for my Geometry is out of the question?
My attempt at displacing the vertices along the top plane using BufferGeometry in the ShaderMaterial however results in the following:
The top plane's vertices of the BufferGeometry are not connected to the other planes, contrary to those of the Geometry, which are connected by using its mergeVertices method. To my knowledge that method is not available for BufferGeometry objects?
Basically what started my fear, uncertainty and doubt concerning Geometry was a post I read by mrdoob.
Summary
I already have this working for Geometry, but would like to make use of the GPU with ShaderMaterial's attributes, seemingly only supported by BufferGeometry, if it offers performance benefits for mobile and if Geometry might be deprecated in the future.
Here is a small snippet illustrating the issue:
let winX = window.innerWidth;
let winY = window.innerHeight;
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, winX / winY, 0.1, 100);
camera.position.set(2, 1, 2);
camera.lookAt(scene.position);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(winX, winY);
document.body.appendChild(renderer.domElement);
const terrainGeo = new THREE.BoxBufferGeometry(1, 1, 1);
const terrainMat = new THREE.ShaderMaterial({
vertexShader: `
attribute float displacement;
varying vec3 dPosition;
void main() {
dPosition = position;
dPosition.y += displacement;
gl_Position = projectionMatrix * modelViewMatrix * vec4(dPosition, 1.0);
}
`,
fragmentShader: `
void main() {
gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
}
`
});
const terrainObj = new THREE.Mesh(terrainGeo, terrainMat);
let displacement = new Float32Array(terrainObj.geometry.attributes.position.count);
displacement.forEach((elem, index) => {
// Select vertex 8 - 11, the top of the cube
if (index >= 8 && index <= 11) {
displacement[index] = Math.random() * 0.1 + 0.25;
}
});
terrainObj.geometry.addAttribute('displacement',
new THREE.BufferAttribute(displacement, 1)
);
scene.add(camera);
scene.add(terrainObj);
const render = () => {
requestAnimationFrame(render);
renderer.render(scene, camera);
}
render();
const gui = new dat.GUI();
const updateBufferAttribute = () => {
terrainObj.geometry.attributes.displacement.needsUpdate = true;
};
gui.add(displacement, 8).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 9).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 10).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 11).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
<script src="https://cdnjs.cloudflare.com/ajax/libs/dat-gui/0.5.1/dat.gui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r76/three.min.js"></script>
<style type="text/css">body { margin: 0 } canvas { display: block }</style>
