Weld edge vertices of BoxBufferGeometry - three.js

I am trying to create terrain in the shape of a cube that allows the vertices of its top plane to be displaced along the y-axis. All vertices adjacent to those of the top plane need to stay connected.
User input, from either desktop or mobile, should move them up or down in a performant manner.
From what I have read it is better to offload expensive operations to the GPU. Achieving the vertex displacement in a ShaderMaterial with a displacement attribute seemed like a perfect fit until I read the following:
As of THREE r72, directly assigning attributes in a ShaderMaterial is no longer supported. A BufferGeometry instance (instead of a Geometry instance) must be used instead.
So it seems that using attributes with my Geometry is out of the question?
My attempt at displacing the vertices of the top plane using a BufferGeometry in the ShaderMaterial, however, results in the following:
The top plane's vertices of the BufferGeometry are not connected to the other planes, unlike those of the Geometry, which are welded by its mergeVertices method. To my knowledge that method is not available on BufferGeometry objects?
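(From what I can tell, newer three.js builds do ship a helper for this in the examples, BufferGeometryUtils.mergeVertices, which returns an indexed geometry with shared vertices. A sketch of how I believe it would be used, assuming such a build, since it does not exist in r76:)
// Sketch only: assumes a newer three.js build that ships BufferGeometryUtils
// (examples/jsm/utils/BufferGeometryUtils.js); the import path depends on your setup.
import { mergeVertices } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

const geo = new THREE.BoxGeometry(1, 1, 1);
geo.deleteAttribute('normal'); // mergeVertices only welds vertices whose attributes all match
geo.deleteAttribute('uv');
const weldedGeo = mergeVertices(geo); // indexed geometry with shared corner vertices
weldedGeo.computeVertexNormals();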
Basically what started my fear, uncertainty and doubt concerning Geometry was a post I read by mrdoob.
Summary
I already have this working with Geometry, but would like to make use of the GPU via ShaderMaterial attributes, which seem to be supported only by BufferGeometry, if that offers performance benefits on mobile and if Geometry might be deprecated in the future.
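One workaround I am considering (just a sketch on my part, not verified, assuming terrainGeo is the BoxBufferGeometry from the snippet below): keep the duplicated vertices, but give every copy that shares an x/z position with a top-face vertex the same displacement value, so the seams stay closed:
// Sketch: key the random displacement by x/z so duplicated vertices that share
// a position (top face plus adjacent side faces) all receive the same value.
const pos = terrainGeo.attributes.position;
const displacement = new Float32Array(pos.count);
const valueByXZ = {};
for (let i = 0; i < pos.count; i++) {
  if (pos.getY(i) === 0.5) { // 0.5 is the top of a 1x1x1 BoxBufferGeometry
    const key = pos.getX(i) + ',' + pos.getZ(i);
    if (valueByXZ[key] === undefined) {
      valueByXZ[key] = Math.random() * 0.1 + 0.25;
    }
    displacement[i] = valueByXZ[key];
  }
}
terrainGeo.addAttribute('displacement', new THREE.BufferAttribute(displacement, 1));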
Here is a small snippet illustrating the issue:
let winX = window.innerWidth;
let winY = window.innerHeight;
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, winX / winY, 0.1, 100);
camera.position.set(2, 1, 2);
camera.lookAt(scene.position);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(winX, winY);
document.body.appendChild(renderer.domElement);
const terrainGeo = new THREE.BoxBufferGeometry(1, 1, 1);
const terrainMat = new THREE.ShaderMaterial({
  vertexShader: `
    attribute float displacement;
    varying vec3 dPosition;
    void main() {
      dPosition = position;
      dPosition.y += displacement;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(dPosition, 1.0);
    }
  `,
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
    }
  `
});
const terrainObj = new THREE.Mesh(terrainGeo, terrainMat);
let displacement = new Float32Array(terrainObj.geometry.attributes.position.count);
displacement.forEach((elem, index) => {
  // Select vertex 8 - 11, the top of the cube
  if (index >= 8 && index <= 11) {
    displacement[index] = Math.random() * 0.1 + 0.25;
  }
});
terrainObj.geometry.addAttribute('displacement',
new THREE.BufferAttribute(displacement, 1)
);
scene.add(camera);
scene.add(terrainObj);
const render = () => {
requestAnimationFrame(render);
renderer.render(scene, camera);
}
render();
const gui = new dat.GUI();
const updateBufferAttribute = () => {
terrainObj.geometry.attributes.displacement.needsUpdate = true;
};
gui.add(displacement, 8).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 9).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 10).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 11).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
<script src="https://cdnjs.cloudflare.com/ajax/libs/dat-gui/0.5.1/dat.gui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r76/three.min.js"></script>
<style type="text/css">body { margin: 0 } canvas { display: block }</style>

Related

Performantly render tens of thousands of spheres of variable size/color/position in Three.js?

This question is picking up from my last question where I found that using Points leads to problems: https://stackoverflow.com/a/60306638/4749956
To solve this you'll need to draw your points using quads instead of points. There are many ways to do that: draw each quad as a separate mesh or sprite, merge all the quads into another mesh, use InstancedMesh where you'll need a matrix per point, or write custom shaders to do points (see the last example in this article).
I've been trying to figure this answer out. My questions are
What is 'instancing'? What is the difference between merging geometries and instancing? And, if I were to do either one of these, what geometry would I use and how would I vary color? I've been looking at this example:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html
And I see that for each sphere you would have a geometry which would apply the position and the size (scale?). Would the underlying geometry be a SphereBufferGeometry of unit radius, then? But, how do you apply color?
Also, I read about the custom shader method, and it makes some vague sense. But, it seems more complex. Would the performance be any better than the above?
Based on your previous question...
First off, instancing is a way to tell three.js to draw the same geometry multiple times but change one or more things for each "instance". IIRC the only thing three.js supports out of the box is setting a different matrix (position, orientation, scale) for each instance. Past that, like having different colors for example, you have to write custom shaders.
Instancing allows you to ask the system to draw many things with one "ask" instead of an "ask" per thing. That means it ends up being much faster. You can think of it like anything: if you want 3 hamburgers you could ask someone to make you 1. When they finished you could ask them to make another. When they finished you could ask them to make a 3rd. That would be much slower than just asking them to make 3 hamburgers at the start. That's not a perfect analogy but it does point out how asking for multiple things one at a time is less efficient than asking for multiple things all at once.
Merging meshes is yet another solution. Following the bad analogy above, merging meshes is like making one big 1-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting toppings and buns on one large burger is marginally faster than doing the same to 3 small burgers.
As for which is the best solution for you, that depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes on the other hand rotate in world space by default, so if you made instances of quads, or a merged set of quads, and tried to rotate them, they would turn and not face the camera like Points do. If you used sphere geometry then you'd have the issue that instead of computing only 6 vertices per quad with a circle drawn on it, you'd be computing hundreds or thousands of vertices per sphere, which would be slower than 6 vertices per quad.
So again it requires a custom shader to keep the points facing the camera.
To do it with instancing, the short version is: you decide which vertex data are repeated for each instance. For example, for a textured quad we need 6 vertex positions and 6 UVs. For these you make normal BufferAttributes.
Then you decide which vertex data are unique to each instance. In your case that is the size, the color, and the center of the point. For each of these we make an InstancedBufferAttribute.
We add all of those attributes to an InstancedBufferGeometry and, as the last argument (to InstancedMesh below), we tell it how many instances.
At draw time you can think of it like this
for each instance
    set size to the next value in the size attribute
    set color to the next value in the color attribute
    set center to the next value in the center attribute
    call the vertex shader 6 times, with position and uv set to the nth value in their attributes.
In this way you get the same geometry (the positions and uvs) used multiple times but each time a few values (size, color, center) change.
body {
margin: 0;
}
#c {
width: 100vw;
height: 100vh;
display: block;
}
#info {
position: absolute;
right: 0;
bottom: 0;
color: red;
background: black;
}
<canvas id="c"></canvas>
<div id="info"></div>
<script type="module">
// Three.js - Picking - RayCaster w/Transparency
// from https://threejsfundamentals.org/threejs/threejs-picking-gpu.html
import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";
function main() {
const infoElem = document.querySelector("#info");
const canvas = document.querySelector("#c");
const renderer = new THREE.WebGLRenderer({ canvas });
const fov = 60;
const aspect = 2; // the canvas default
const near = 0.1;
const far = 200;
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.z = 30;
const scene = new THREE.Scene();
scene.background = new THREE.Color(0);
const pickingScene = new THREE.Scene();
pickingScene.background = new THREE.Color(0);
// put the camera on a pole (parent it to an object)
// so we can spin the pole to move the camera around the scene
const cameraPole = new THREE.Object3D();
scene.add(cameraPole);
cameraPole.add(camera);
function randomNormalizedColor() {
return Math.random();
}
function getRandomInt(n) {
return Math.floor(Math.random() * n);
}
function getCanvasRelativePosition(e) {
const rect = canvas.getBoundingClientRect();
return {
x: e.clientX - rect.left,
y: e.clientY - rect.top
};
}
const textureLoader = new THREE.TextureLoader();
const particleTexture =
"https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";
const vertexShader = `
attribute float size;
attribute vec3 customColor;
attribute vec3 center;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vColor = customColor;
vUv = uv;
vec3 viewOffset = position * size ;
vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
gl_Position = projectionMatrix * mvPosition;
}
`;
const fragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.5) discard;
gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
}
`;
const pickFragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.25) discard;
gl_FragColor = vec4(vColor.rgb, 1.0);
}
`;
const materialSettings = {
uniforms: {
texture: {
type: "t",
value: textureLoader.load(particleTexture)
}
},
vertexShader: vertexShader,
fragmentShader: fragmentShader,
blending: THREE.NormalBlending,
depthTest: true,
transparent: false
};
const createParticleMaterial = () => {
const material = new THREE.ShaderMaterial(materialSettings);
return material;
};
const createPickingMaterial = () => {
const material = new THREE.ShaderMaterial({
...materialSettings,
fragmentShader: pickFragmentShader,
blending: THREE.NormalBlending
});
return material;
};
const geometry = new THREE.InstancedBufferGeometry();
const pickingGeometry = new THREE.InstancedBufferGeometry();
const colors = [];
const sizes = [];
const pickingColors = [];
const pickingColor = new THREE.Color();
const centers = [];
const numSpheres = 30;
const positions = [
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
-0.5, 0.5,
0.5, -0.5,
0.5, 0.5,
];
const uvs = [
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1,
];
for (let i = 0; i < numSpheres; i++) {
colors[3 * i] = randomNormalizedColor();
colors[3 * i + 1] = randomNormalizedColor();
colors[3 * i + 2] = randomNormalizedColor();
const rgbPickingColor = pickingColor.setHex(i + 1);
pickingColors[3 * i] = rgbPickingColor.r;
pickingColors[3 * i + 1] = rgbPickingColor.g;
pickingColors[3 * i + 2] = rgbPickingColor.b;
sizes[i] = getRandomInt(5);
centers[3 * i] = getRandomInt(20);
centers[3 * i + 1] = getRandomInt(20);
centers[3 * i + 2] = getRandomInt(20);
}
geometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
geometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
geometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
);
geometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
geometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));
const material = createParticleMaterial();
const points = new THREE.InstancedMesh(geometry, material, numSpheres);
// setup geometry and material for GPU picking
pickingGeometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
pickingGeometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
pickingGeometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
);
pickingGeometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
pickingGeometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
);
const pickingMaterial = createPickingMaterial();
const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);
scene.add(points);
pickingScene.add(pickingPoints);
function resizeRendererToDisplaySize(renderer) {
const canvas = renderer.domElement;
const width = canvas.clientWidth;
const height = canvas.clientHeight;
const needResize = canvas.width !== width || canvas.height !== height;
if (needResize) {
renderer.setSize(width, height, false);
}
return needResize;
}
class GPUPickHelper {
constructor() {
// create a 1x1 pixel render target
this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
this.pixelBuffer = new Uint8Array(4);
}
pick(cssPosition, pickingScene, camera) {
const { pickingTexture, pixelBuffer } = this;
// set the view offset to represent just a single pixel under the mouse
const pixelRatio = renderer.getPixelRatio();
camera.setViewOffset(
renderer.getContext().drawingBufferWidth, // full width
renderer.getContext().drawingBufferHeight, // full height
(cssPosition.x * pixelRatio) | 0, // rect x
(cssPosition.y * pixelRatio) | 0, // rect y
1, // rect width
1 // rect height
);
// render the scene
renderer.setRenderTarget(pickingTexture);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// clear the view offset so rendering returns to normal
camera.clearViewOffset();
//read the pixel
renderer.readRenderTargetPixels(
pickingTexture,
0, // x
0, // y
1, // width
1, // height
pixelBuffer
);
const id =
(pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
infoElem.textContent = `You clicked sphere number ${id}`;
return id;
}
}
const pickHelper = new GPUPickHelper();
function render(time) {
time *= 0.001; // convert to seconds;
if (resizeRendererToDisplaySize(renderer)) {
const canvas = renderer.domElement;
camera.aspect = canvas.clientWidth / canvas.clientHeight;
camera.updateProjectionMatrix();
}
cameraPole.rotation.y = time * 0.1;
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
function onClick(e) {
const pickPosition = getCanvasRelativePosition(e);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
function onTouch(e) {
const touch = e.touches[0];
const pickPosition = getCanvasRelativePosition(touch);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
window.addEventListener("mousedown", onClick);
window.addEventListener("touchstart", onTouch);
}
main();
</script>
This is quite a broad topic. In short, both merging and instancing are about reducing the number of draw calls when rendering something.
If you bind your sphere geometry once but keep re-rendering it, it costs more to tell your computer to draw it many times than it costs the computer to actually draw it. You end up with the GPU, a powerful parallel processing device, sitting idle.
Obviously, if you create a unique sphere at each point in space and merge them all, you pay the price of telling the GPU to render only once, and it will be busy rendering thousands of your spheres.
However, merging will increase your memory footprint and has some overhead when you're actually creating the unique data. Instancing is a built-in, clever way of achieving the same effect at a lower memory cost.
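To make that concrete, here is a rough sketch of both approaches (an assumption on my part: the non-module examples build is loaded so BufferGeometryUtils hangs off the THREE namespace, and the three.js version has InstancedMesh):
// Merging: bake each sphere's transform into one big geometry -> one draw call,
// but the merged geometry stores every vertex of every sphere.
const pieces = [];
for (let i = 0; i < 1000; i++) {
  const g = new THREE.SphereBufferGeometry(1, 8, 6);
  g.translate(Math.random() * 100, Math.random() * 100, Math.random() * 100);
  pieces.push(g);
}
const mergedGeo = THREE.BufferGeometryUtils.mergeBufferGeometries(pieces);
scene.add(new THREE.Mesh(mergedGeo, new THREE.MeshBasicMaterial()));

// Instancing: one small geometry plus a matrix per instance -> also one draw call,
// but only one copy of the sphere's vertex data lives in memory.
const instanced = new THREE.InstancedMesh(
  new THREE.SphereBufferGeometry(1, 8, 6),
  new THREE.MeshBasicMaterial(),
  1000
);
const m = new THREE.Matrix4();
for (let i = 0; i < 1000; i++) {
  m.makeTranslation(Math.random() * 100, Math.random() * 100, Math.random() * 100);
  instanced.setMatrixAt(i, m);
}
scene.add(instanced);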
I have an article written on this topic.

Particle system design using Three.js and Shader

I'm very new to this community. As I'm asking a question, if there is something I claim that is not right, please correct me.
Now to the point: I'm designing a particle system using the Three.js library, in particular using THREE.Geometry() and controlling the vertices with a shader. I want my particles' movement restricted inside a box, which means that when a particle crosses a face of the box, its new position will be on the opposite side of that face.
Here's my approach in the vertex shader:
uniform float elapsedTime;
void main() {
  gl_PointSize = 3.2;
  vec3 pos = position;
  pos.y -= elapsedTime * 2.1;
  if (pos.y < -100.0) {
    pos.y = 100.0;
  }
  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
The elapsedTime is sent from the JavaScript animation loop via a uniform, and the y position of each vertex is updated according to the time. As a test, I want a particle that drops below the bottom plane (y = -100) to move to the top plane. That was my plan. And this is the result after they all reach the bottom:
Start to fall
After reaching the bottom
So, what am I missing here?
You can achieve it using the mod() function:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 300);
var renderer = new THREE.WebGLRenderer({
  antialias: true
});
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
var controls = new THREE.OrbitControls(camera, renderer.domElement);
var gridTop = new THREE.GridHelper(200, 10);
gridTop.position.y = 100;
var gridBottom = new THREE.GridHelper(200, 10);
gridBottom.position.y = -100;
scene.add(gridTop, gridBottom);
var pts = [];
for (let i = 0; i < 1000; i++) {
pts.push(new THREE.Vector3(Math.random() - 0.5, Math.random() - 0.5, Math.random() - 0.5).multiplyScalar(100));
}
var geom = new THREE.BufferGeometry().setFromPoints(pts);
var mat = new THREE.PointsMaterial({
size: 2,
color: "aqua"
});
var uniforms = {
time: {
value: 0
},
highY: {
value: 100
},
lowY: {
value: -100
}
}
mat.onBeforeCompile = shader => {
shader.uniforms.time = uniforms.time;
shader.uniforms.highY = uniforms.highY;
shader.uniforms.lowY = uniforms.lowY;
console.log(shader.vertexShader);
shader.vertexShader = `
uniform float time;
uniform float highY;
uniform float lowY;
` + shader.vertexShader;
shader.vertexShader = shader.vertexShader.replace(
`#include <begin_vertex>`,
`#include <begin_vertex>
float totalY = highY - lowY;
transformed.y = highY - mod(highY - (transformed.y - time * 20.), totalY);
`
);
}
var points = new THREE.Points(geom, mat);
scene.add(points);
var clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
uniforms.time.value = clock.getElapsedTime();
renderer.render(scene, camera);
});
body {
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
<script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>
You can not change state in a shader. A vertex shader's only outputs are gl_Position (to generate points/lines/triangles) and the varyings that get passed to the fragment shader. A fragment shader's only output is gl_FragColor (in general). So trying to change pos.y stores nothing; the moment the shader exits, your change is forgotten.
For your particle code, though, you could make the position a repeating function of the time:
const float duration = 5.0;
float t = fract(elapsedTime / duration);
pos.y = mix(-100.0, 100.0, t);
Assuming elapsedTime is in seconds, pos.y will go from -100 to 100 over 5 seconds and repeat.
Note that in this case all the particles will fall at the same time. You could add an attribute to give each of them a different time offset, as sketched below, or you could work their positions into your own formula. Related to that, you might find this article useful.
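A sketch of the attribute idea, assuming the BufferGeometry from the snippet above is called geom, the elapsedTime/duration shader from just above, and a build that has setAttribute (older builds use addAttribute):
// One random start offset per particle, uploaded as a per-vertex attribute.
const count = geom.attributes.position.count;
const offsets = new Float32Array(count);
for (let i = 0; i < count; i++) {
  offsets[i] = Math.random() * 5.0; // anywhere within the 5 second cycle
}
geom.setAttribute('timeOffset', new THREE.BufferAttribute(offsets, 1));

// ...and in the vertex shader:
//   attribute float timeOffset;
//   float t = fract((elapsedTime + timeOffset) / duration);
//   pos.y = mix(-100.0, 100.0, t);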
You could also do the particle movement in JavaScript, like this example and this one, updating the positions in the Geometry (or better, a BufferGeometry).
Yet another solution is to do the movement in a separate shader by storing the positions in a texture and updating them into a new texture, then using that texture as input to another set of shaders that draws the particles.

Three.js custom Shader with Texture

I want to write a custom shader which manipulates my image with three.js.
For that I want to create a plane with the image as a texture. Afterwards I want to move vertices around to distort the image.
(If that is absolutely the wrong way to do this, please tell me.)
First I have my shaders:
<script type="x-shader/x-vertex" id="vertexshader">
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
void main() {
// Pass the texcoord to the fragment shader.
v_texCoord = a_texCoord;
gl_Position = projectionMatrix *
modelViewMatrix *
vec4(position,1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main() {
vec4 color = texture2D(u_texture, v_texCoord);
gl_FragColor = color;
}
</script>
I don't really understand what texture2D is doing, but I found it in other code fragments.
What I want with this sample: just color each fragment (gl_FragColor) with the color from the «underlying» image (= texture).
In my code I have setup a normal three scene with a plane:
// set some camera attributes
var VIEW_ANGLE = 45,
ASPECT = window.innerWidth/window.innerHeight,
NEAR = 0.1,
FAR = 1000;
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.position.set(0, 0, 15);
var vertShader = document.getElementById('vertexshader').innerHTML;
var fragShader = document.getElementById('fragmentshader').innerHTML;
var texloader = new THREE.TextureLoader();
var texture = texloader.load("img/color.jpeg");
var uniforms = {
u_texture: {type: 't', value: 0, texture: texture},
};
var attributes = {
a_texCoord: {type: 'v2', value: new THREE.Vector2()}
};
// create the final material
var shaderMaterial = new THREE.ShaderMaterial({
uniforms: uniforms,
vertexShader: vertShader,
fragmentShader: fragShader
});
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild(renderer.domElement);
var plane = {
width: 5,
height: 5,
widthSegments: 10,
heightSegments: 15
}
var geometry = new THREE.PlaneBufferGeometry(plane.width, plane.height, plane.widthSegments, plane.heightSegments)
var material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
var plane = new THREE.Mesh( geometry, shaderMaterial );
scene.add(plane);
plane.rotation.y += 0.2;
var render = function () {
requestAnimationFrame(render);
// plane.rotation.x += 0.1;
renderer.render(scene, camera);
};
render();
Unfortunately, after running that code I just see a black window, although I know that if I use the basic material (material) when creating the mesh instead, I can see the plane clearly.
So it must be the shaderMaterial or the shaders.
Questions:
Do I have to define the uniform u_texture and the attribute a_texCoord in my ShaderMaterial's uniforms and attributes? And do they have to have the exact same name?
How many vertices are there anyway? Will I get a vertex for every pixel in the image? Or is it just 4, one for each corner of the plane?
What value does a_texCoord have? Nothing happens if I write:
var attributes = {
a_texCoord: {type: 'v2', value: new THREE.Vector2(1,1)}
};
Or do I have to use some mapping (built in map stuff from three)? But how would I then change vertex positions?
Could someone shed some light on that matter?
I got it to work by changing this:
var uniforms = {
u_texture: {type: 't', value: 0, texture: texture},
};
To this:
var uniforms = {
u_texture: {type: 't', value: texture},
};
Anyway, all other questions are still open and answers are highly appreciated.
(btw: why the downvote from someone?)
do I have to define the uniform u_texture and the attribute a_texCoord
in my shader Material uniforms and attributes? And do they have to
have the exact same name?
Yes and yes. The uniforms are defined as part of the shader material, while the attributes have been moved from ShaderMaterial to the BufferGeometry class in version 72 (I'm assuming you are using an up-to-date version, so here is how you do this today):
var geometry = new THREE.PlaneBufferGeometry(...);
// first, create an array to hold the a_texCoord-values per vertex
var numVertices = (plane.widthSegments + 1) * (plane.heightSegments + 1);
var texCoordBuffer = new Float32Array(2 * numVertices);
// now register it as a new attribute (the 2 here indicates that there are
// two values per element (vec2))
geometry.addAttribute('a_texCoord', new THREE.BufferAttribute(texCoordBuffer, 2));
As you can see, the attribute will only work if it has the exact same name as specified in your shader-code.
I don't know exactly what you are planning to use this for, but it sounds suspiciously like you want to have the uv-coordinates. If that is the case, you can save yourself a lot of work if you have a look at the THREE.PlaneBufferGeometry-class. It already provides an attribute named uv that is probably exactly what you are looking for. So you just need to change the attribute-name in your shader-code to
attribute vec2 uv;
How many vertices are there anyway? Will I get a vertices for every
pixel in the image? Or is it just 4 for each corner of the plane?
The vertices are created according to the heightSegments and widthSegments parameters. So if you set both to 5, there will be (5 + 1) * (5 + 1) = 36 vertices (+1 because a line with only 1 segment has two vertices etc.) and 5 * 5 * 2 = 50 triangles (with 150 indices) in total.
Another thing to note is that the PlaneBufferGeometry is an indexed geometry. This means that every vertex (and every other attribute value) is stored only once, although it is used by multiple triangles. There is a special index attribute that contains the information about which vertices are used to build which triangles.
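For example, a quick sketch using the 10 x 15 segment plane from your code (my numbers, for illustration):
// 10 x 15 segments -> (10 + 1) * (15 + 1) = 176 vertices, 10 * 15 * 2 = 300 triangles
var geometry = new THREE.PlaneBufferGeometry(5, 5, 10, 15);
console.log(geometry.attributes.position.count); // 176
console.log(geometry.attributes.uv.count);       // 176 (one uv per vertex)
console.log(geometry.index.count);               // 900 (300 triangles * 3 indices)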
What value does a_texCoord have? Nothing happens if I write: ...
I hope the above helps to answer that.
Or do I have to use some mapping (built in map stuff from three)?
I would suggest you use the uv attribute as described above. But you absolutely don't have to.
But how would I then change vertex positions?
There are at least two ways to do this: in the vertex shader or via JavaScript. The latter can be seen here: http://codepen.io/usefulthink/pen/vKzRKr?editors=1010
(the relevant part for updating the geometry starts in line 84).
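And a minimal sketch of the JavaScript route (assuming the PlaneBufferGeometry from above):
var positions = geometry.attributes.position;
for (var i = 0; i < positions.count; i++) {
  // push every vertex a little along z to distort the image
  positions.setZ(i, Math.random() * 0.2);
}
positions.needsUpdate = true; // tells three.js to re-upload the buffer to the GPU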

Animating custom shader in webgl / three.js

I am currently learning OpenGL and stumbled across this tutorial:
http://patriciogonzalezvivo.com/2015/thebookofshaders/03/
I tried to use the snippet in WebGL in order to further understand the mechanism, but somehow it doesn't work and I am honestly not sure why. I am sure there must be some syntax error, but what could it be? If not, then how can I make this work?
To be honest, I'm trying to understand how to implement u_time. I thought the GPU automatically has a built-in timer which causes the color transition animation.
// set the scene size
var WIDTH = 400,
HEIGHT = 300;
// set some camera attributes
var VIEW_ANGLE = 45,
ASPECT = WIDTH / HEIGHT,
NEAR = 0.1,
FAR = 10000;
// get the DOM element to attach to
// - assume we've got jQuery to hand
var $container = $('#container');
// create a WebGL renderer, camera
// and a scene
var renderer = new THREE.WebGLRenderer();
var camera = new THREE.Camera( VIEW_ANGLE,
ASPECT,
NEAR,
FAR );
var scene = new THREE.Scene();
// the camera starts at 0,0,0 so pull it back
camera.position.z = 300;
// start the renderer
renderer.setSize(WIDTH, HEIGHT);
// attach the render-supplied DOM element
$container.append(renderer.domElement);
// create the sphere's material
var shaderMaterial = new THREE.MeshShaderMaterial({
vertexShader: $('#vertexshader').text(),
fragmentShader: $('#fragmentshader').text()
});
// set up the sphere vars
var radius = 50, segments = 16, rings = 16;
// create a new mesh with sphere geometry -
// we will cover the sphereMaterial next!
var sphere = new THREE.Mesh(
new THREE.Sphere(radius, segments, rings),
shaderMaterial);
// add the sphere to the scene
scene.addChild(sphere);
// draw!
renderer.render(scene, camera);
<div id="container"></div>
<script type="x-shader/x-vertex" id="vertexshader">
// switch on high precision floats
#ifdef GL_ES
precision highp float;
#endif
void main()
{
gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
uniform float u_time;
void main() {
gl_FragColor = vec4(sin(u_time),0.0,0.0,1.0);
}
</script>
<script src="https://aerotwist.com/static/tutorials/an-introduction-to-shaders-part-1/demo/js/Three.js"></script>
<script src="https://code.jquery.com/jquery-2.2.4.js"></script>
You were right that you need to bind/update the value in JavaScript. To do that, you need to do two things:
Declare the u_time uniform (including its type and initial value) that is in the shader when you create the shader material.
var shaderMaterial = new THREE.MeshShaderMaterial({
uniforms: { // <- This is an object with your uniforms as keys
u_time: { type: "f", value: 0 }
},
vertexShader: $('#vertexshader').text(),
fragmentShader: $('#fragmentshader').text()
});
You need to have a render loop where you continuously update the uniform's value. Here is a basic example of a render loop which uses requestAnimationFrame() to call itself once the browser is ready to render another frame:
function draw () {
requestAnimationFrame(draw);
// Update shader's time
sphere.materials[0].uniforms.u_time.value += 0.01;
// draw!
renderer.render(scene, camera);
}
draw();
Note that you update uniforms.u_time.value, not uniforms.u_time. This is because a uniform holds both its type and its current value.
Working jsFiddle with changes
Also know that you are using a very old version of three.js in your fiddle. Version r40 is from 2011 and we are up to r76 currently. There are some niceties in recent versions that make this simpler.
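For reference, a rough sketch of the same idea against a current (r7x) API. This is not a drop-in replacement for the fiddle, just an illustration of the newer ShaderMaterial and Clock usage:
var uniforms = { u_time: { value: 0 } };
var material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: document.getElementById('vertexshader').textContent,
  fragmentShader: document.getElementById('fragmentshader').textContent
});
var sphere = new THREE.Mesh(new THREE.SphereGeometry(50, 16, 16), material);
scene.add(sphere);

var clock = new THREE.Clock();
(function draw() {
  requestAnimationFrame(draw);
  uniforms.u_time.value = clock.getElapsedTime(); // drive the shader with elapsed seconds
  renderer.render(scene, camera);
})();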

Three.js Point cloud with transparency using a shader material disappears when adding non transparent cube

The behavior happens only on Firefox (I use the Developer Edition).
I have some point clouds which need to use a shader with transparency activated.
When I add a CubeGeometry to the scene without transparency it makes the point cloud disappear.
I also noted that using a point cloud with a PointMaterial works as intended, but in my program I need to use shaders.
If you use shaderMaterial on the cube in this part of the code:
mesh = new THREE.Mesh(geometry, material);
//mesh = new THREE.Mesh(geometry, shaderMaterial);
The cloud appears correctly as well, but of course I need a non-transparent cube with some material other than the cloud's shader.
I'm using three.js r74
Thank you for your help!
var $ = document.querySelector.bind(document);
var camera, scene, renderer, geometry, material, mesh;
init();
animate();
function init() {
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000);
camera.position.z = 500;
scene.add(camera);
var pointMaterial = new THREE.PointsMaterial();
var vShader = $('#vertexshader');
var fShader = $('#fragmentshader');
var shaderMaterial =
new THREE.ShaderMaterial({
vertexShader: vShader.text,
fragmentShader: fShader.text
});
shaderMaterial.transparent = true;
shaderMaterial.vertexColors = THREE.VertexColors;
shaderMaterial.depthWrite = true;
geometry = new THREE.Geometry();
particleCount = 20000;
for (i = 0; i < particleCount; i++) {
var vertex = new THREE.Vector3();
vertex.x = Math.random() * 2000 - 1000;
vertex.y = Math.random() * 2000 - 1000;
vertex.z = Math.random() * 2000 - 1000;
geometry.vertices.push(vertex);
}
parameters = [
[
[1, 1, 0.5], 5
],
[
[0.95, 1, 0.5], 4
],
[
[0.90, 1, 0.5], 3
],
[
[0.85, 1, 0.5], 2
],
[
[0.80, 1, 0.5], 1
]
];
parameterCount = parameters.length;
for (i = 0; i < parameterCount; i++) {
color = parameters[i][0];
size = parameters[i][1];
//If we use pointMaterial instead of ShaderMaterial the cloud is visible
particles = new THREE.Points(geometry, shaderMaterial);
particles.sizeAttenuation = true;
particles.sortParticles = true;
particles.colorsNeedUpdate = true;
particles.scale.set(1, 1, 1);
particles.rotation.x = Math.random() * 6;
particles.rotation.y = Math.random() * 6;
particles.rotation.z = Math.random() * 6;
scene.add(particles);
}
geometry = new THREE.CubeGeometry(200, 200, 200);
//POINT CLOUD DISAPPEARS WHEN USING NON TRANSPARENT MATERIAL
material = new THREE.MeshBasicMaterial({color: 0x00ff00});
mesh = new THREE.Mesh(geometry, material);
//mesh = new THREE.Mesh(geometry, shaderMaterial);
scene.add(mesh);
renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
mesh.rotation.x += 0.01;
mesh.rotation.y += 0.02;
renderer.render(scene, camera);
}
<script type="x-shader/x-vertex" id="vertexshader">
void main()
{
gl_PointSize = 5.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
precision highp float;
void main()
{
gl_FragColor = vec4(1.0,0.0,1.0,1.0);
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r74/three.js"></script>
It's not clear what effect you're trying to achieve
Do you want to see the points inside the cube?
Your shader is returning opaque purple:
gl_FragColor = vec4(1.0,0.0,1.0,1.0);
So your particles will not be transparent regardless of the transparent setting on the material.
Your cube is non-transparent, so of course the points inside the cube disappear. That's the definition of non-transparent.
Setting the cube to transparent won't fix the issue either. Dealing with transparency is hard. You generally need to draw things back to front. To do that, three.js needs every object to be drawable separately so it can first draw all the particles behind the cube, then the back of the cube, then the particles inside the cube, then the front of the cube, then the particles in front of the cube.
To do that requires you to split the cube into 6 planes and put every particle in its own scene object.
There are ways to fake it. Turning off depthTest can sometimes be used as a substitute but it won't be totally correct.
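For example, a sketch of that fake using the materials from the question (not physically correct, just a common workaround):
shaderMaterial.transparent = true;
shaderMaterial.depthTest = false;   // particles no longer hide behind the cube's depth
shaderMaterial.depthWrite = false;  // particles don't occlude each other via the depth buffer
particles.renderOrder = 1;          // set this on each Points object so they draw after the cube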
