I'm trying to build a tile system in Three.js: green for ground, blue for water.
I'm using a shader on a PlaneBufferGeometry.
Here is what I have so far.
Relevant code:
JS: variable chunk and function DoPlaneStuff() (both at the beginning)
HTML: vertex and fragment shader
var chunk = {
// number of width and height segments for PlaneBuffer
segments: 32,
// Heightmap: 0 = water, 1 = ground
heightmap: [
[1, 0, 0],
[1, 1, 0],
[1, 0, 1],
],
// size of the plane
size: 40
};
function DoPlaneStuff() {
var uniforms = {
heightmap: {
type: "iv1",
// transform the 2d Array to a simple array
value: chunk.heightmap.reduce((p, c) => p.concat(c), [])
},
hmsize: {
type: "f",
value: chunk.heightmap[0].length
},
coord: {
type: "v2",
value: new THREE.Vector2(-chunk.size / 2, -chunk.size / 2)
},
size: {
type: "f",
value: chunk.size
}
};
console.info("UNIFORMS GIVEN :", uniforms);
var shaderMaterial = new THREE.ShaderMaterial({
uniforms: uniforms,
vertexShader: document.getElementById("v_shader").textContent,
fragmentShader: document.getElementById("f_shader").textContent
});
var plane = new THREE.Mesh(
new THREE.PlaneBufferGeometry(chunk.size, chunk.size, chunk.segments, chunk.segments),
shaderMaterial
);
plane.rotation.x = -Math.PI / 2;
scene.add(plane);
}
// --------------------- END OF RELEVANT CODE
window.addEventListener("load", Init);
function Init() {
Init3dSpace();
DoPlaneStuff();
Render();
}
var camera_config = {
dist: 50,
angle: (5 / 8) * (Math.PI / 2)
}
var scene, renderer, camera;
function Init3dSpace() {
scene = new THREE.Scene();
renderer = new THREE.WebGLRenderer({
antialias: true,
logarithmicDepthBuffer: true
});
camera = new THREE.PerspectiveCamera(
50,
window.innerWidth / window.innerHeight,
0.1,
1000
);
camera.position.y = camera_config.dist * Math.sin(camera_config.angle);
camera.position.x = 0;
camera.position.z = camera_config.dist * Math.cos(camera_config.angle);
camera.rotation.x = -camera_config.angle;
var light = new THREE.HemisphereLight(0xffffff, 10);
light.position.set(0, 50, 0);
scene.add(light);
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
}
function Render() {
renderer.render(scene, camera);
}
body {
overflow: hidden;
margin: 0;
}
<script src="//cdnjs.cloudflare.com/ajax/libs/three.js/r70/three.min.js"></script>
<!-- VERTEX SHADER -->
<script id="v_shader" type="x-shader/x-vertex">
// size of the plane
uniform float size;
// coordinates of the geometry
uniform vec2 coord;
// heightmap size (=width and height of the heightmap)
uniform float hmsize;
uniform int heightmap[9];
varying float colorValue;
void main() {
int xIndex = int(floor(
(position.x - coord.x) / (size / hmsize)
));
int yIndex = int(floor(
(-1.0 * position.y - coord.y) / (size / hmsize)
));
// Get the index of the corresponding tile in the array
int index = xIndex + int(hmsize) * yIndex;
// get the value of the tile
colorValue = float(heightmap[index]);
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<!-- FRAGMENT SHADER -->
<script id="f_shader" type="x-shader/x-fragment">
varying float colorValue;
void main() {
// default color if something unexpected happens: RED
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
// IF WATER
if (colorValue == 0.0) {
// BLUE
gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
}
// IF GROUND
if (colorValue == 1.0) {
// GREEN
gl_FragColor = vec4( 0.1, 0.6, 0.0, 1.0 );
}
}
</script>
As you can see it's almost working, but I have these red lines splitting the green and blue areas, and I can't figure out why.
I call these red fragments the "lost ones" because they don't map to any tile, and I can't see why.
I could only notice that with a greater value of chunk.segments (the number of height and width segments of the geometry) I get thinner red lines.
I would like to know how to get a gradient fill between the green and blue zones instead of the red.
The red lines are formed by triangles that have some vertices lying in a ground tile and other vertices in a water tile. The GPU then interpolates colorValue across each such triangle, producing a smooth gradient of values between 0 and 1 instead of the sharp step you probably expect.
There are several solutions for this. You can change the condition in your shader to choose the color based on the midpoint: if colorValue < 0.5, output blue, otherwise green (sketched below). That won't work well if you later decide you want more tile types, though. A better solution would be to generate your geometry in a way that all vertices of every triangle lie in a single tile; that involves doubling up the vertices that lie on tile boundaries. You could also add the flat interpolation qualifier to colorValue, but then it's harder to control which vertex's attribute the triangle ends up using.
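A minimal sketch of that midpoint test, reusing the existing colorValue varying in the fragment shader above:
// sharp step at the tile boundary; no red fallback needed
if (colorValue < 0.5) {
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0); // water
} else {
    gl_FragColor = vec4(0.1, 0.6, 0.0, 1.0); // ground
}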
... I just noticed that you do want a gradient instead of a sharp step. That's even easier: move the color selection code from the fragment shader to the vertex shader and just output the resulting interpolated color in the fragment shader, as in the sketch below.
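A minimal sketch of that change, keeping the tile lookup from your vertex shader unchanged (only the color choice moves):
// vertex shader: choose the color per vertex; the GPU interpolates it
varying vec3 vColor;
void main() {
    // ... same xIndex/yIndex/heightmap lookup as above, yielding colorValue ...
    vColor = colorValue == 0.0 ? vec3(0.0, 0.0, 1.0) : vec3(0.1, 0.6, 0.0);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// fragment shader: just output the interpolated color
varying vec3 vColor;
void main() {
    gl_FragColor = vec4(vColor, 1.0);
}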
Related
I have the following setup for my THREE.Points Object:
this.particleGeometry = new THREE.BufferGeometry()
this.particleMaterial = new THREE.ShaderMaterial(
{
vertexShader: vshader,
fragmentShader: fshader,
blending: THREE.AdditiveBlending,
depthWrite: false,
uniforms: {
uTime: new THREE.Uniform(0),
uMousePosition: this.mousePosition
}
}
)
and then some code to place points in the BufferGeometry on a sphere. That is working fine.
I also set up a Raycaster to track the mouse position intersecting a hidden plane and then update the uniform uMousePosition accordingly. That also works fine, I get the mouse position sent to my vertex shader.
Now I am trying to make the particles within a certain distance d of the mouse get pushed away from it (the closest ones pushed the most, of course), and also to apply a gravity back toward their original positions so everything is restored over time.
So here is what I have in my vertex shader:
uniform vec3 uMousePosition;

float lerp(float a, float b, float amount) {
    return a + (b - a) * amount;
}

void main() {
    vec3 p = position;
    float dist = min(distance(p, uMousePosition), 1.);
    float lerpFactor = .2;
    p.x = lerp(p.x, position.x * dist, lerpFactor);
    p.y = lerp(p.y, position.y * dist, lerpFactor);
    p.z = lerp(p.z, position.z * dist, lerpFactor); // mouse is always at z = 0
    vec4 mvPosition = modelViewMatrix * vec4(p, 1.);
    gl_PointSize = 30. * (1. / -mvPosition.z);
    gl_Position = projectionMatrix * mvPosition;
}
And here is what it looks like when the mouse is outside the sphere (I added a small sphere that moves with the mouse position to indicate it):
And here when the mouse is inside:
Outside already looks kind of correct, but with the mouse inside, the particles only move closer to their original positions, where they should be pushed further outside instead. I guess I somehow have to determine the direction of the distance.
Also, the lerp method does not actually lerp; the particles jump directly to their target positions.
So I wonder how I get the correct distance to the mouse so the particles always move within a certain area, and also how to animate the lerp / gravity effect.
Here's how you could do it, as a first approximation:
body{
overflow: hidden;
margin: 0;
}
<script type="module">
import * as THREE from "https://cdn.skypack.dev/three@0.136.0";
import {OrbitControls} from "https://cdn.skypack.dev/three@0.136.0/examples/jsm/controls/OrbitControls.js";
import * as BufferGeometryUtils from "https://cdn.skypack.dev/three@0.136.0/examples/jsm/utils/BufferGeometryUtils.js";
let scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 100);
camera.position.set(0, 0, 10);
let renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
let controls = new OrbitControls(camera, renderer.domElement);
let marker = new THREE.Mesh(new THREE.SphereGeometry(0.5, 16, 8), new THREE.MeshBasicMaterial({color: "red", wireframe: true}));
scene.add(marker);
let g = new THREE.IcosahedronGeometry(4, 20);
g = BufferGeometryUtils.mergeVertices(g);
let uniforms = {
mousePos: {value: new THREE.Vector3()}
}
let m = new THREE.PointsMaterial({
size: 0.1,
onBeforeCompile: shader => {
shader.uniforms.mousePos = uniforms.mousePos;
shader.vertexShader = `
uniform vec3 mousePos;
${shader.vertexShader}
`.replace(
`#include <begin_vertex>`,
`#include <begin_vertex>
vec3 seg = position - mousePos;
vec3 dir = normalize(seg);
float dist = length(seg);
if (dist < 2.){
float force = clamp(1. / (dist * dist), 0., 1.);
transformed += dir * force;
}
`
);
console.log(shader.vertexShader);
}
});
let p = new THREE.Points(g, m);
scene.add(p);
let clock = new THREE.Clock();
renderer.setAnimationLoop( _ => {
let t = clock.getElapsedTime();
marker.position.x = Math.sin(t * 0.5) * 5;
marker.position.y = Math.cos(t * 0.3) * 5;
uniforms.mousePos.value.copy(marker.position);
renderer.render(scene, camera);
})
</script>
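To drive mousePos from the real cursor instead of the animated marker, the usual pattern is a raycast against an invisible plane, copied into the uniform. A sketch against the snippet above; the z = 0 plane is an assumption, so match it to your own hidden plane:
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();
const hiddenPlane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0); // z = 0
const hit = new THREE.Vector3();
window.addEventListener("pointermove", e => {
  // convert to normalized device coordinates (-1..1)
  pointer.x = (e.clientX / innerWidth) * 2 - 1;
  pointer.y = -(e.clientY / innerHeight) * 2 + 1;
  raycaster.setFromCamera(pointer, camera);
  if (raycaster.ray.intersectPlane(hiddenPlane, hit)) {
    uniforms.mousePos.value.copy(hit);
  }
});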
This question is picking up from my last question where I found that using Points leads to problems: https://stackoverflow.com/a/60306638/4749956
To solve this you'll need to draw your points using quads instead of points. There are many ways to do that: draw each quad as a separate mesh or sprite, merge all the quads into another mesh, use InstancedMesh (where you'll need a matrix per point), or write custom shaders to draw points (see the last example in this article).
I've been trying to figure this answer out. My questions are:
What is "instancing"? What is the difference between merging geometries and instancing? And if I were to do either one of these, what geometry would I use, and how would I vary the color? I've been looking at this example:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html
And I see that for each sphere you would have a geometry which would apply the position and the size (scale?). Would the underlying geometry be a SphereBufferGeometry of unit radius, then? But how do you apply color?
Also, I read about the custom shader method, and it makes some vague sense. But it seems more complex. Would the performance be any better than the above?
Based on your previous question...
First off, instancing is a way to tell three.js to draw the same geometry multiple times but change one or more things for each "instance". IIRC the only thing three.js supports out of the box is setting a different matrix (position, orientation, scale) for each instance. Past that, like having different colors for example, you have to write custom shaders.
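A minimal sketch of that out-of-the-box path, assuming a three.js build that ships THREE.InstancedMesh; the count and the random placement are illustrative only:
const count = 1000;
const mesh = new THREE.InstancedMesh(
  new THREE.SphereBufferGeometry(1, 8, 6),        // geometry built once, shared
  new THREE.MeshBasicMaterial({ color: "green" }),
  count                                           // number of instances
);
const m = new THREE.Matrix4();
for (let i = 0; i < count; i++) {
  // one matrix (position/orientation/scale) per instance
  m.makeTranslation(Math.random() * 40 - 20, Math.random() * 40 - 20, Math.random() * 40 - 20);
  mesh.setMatrixAt(i, m);
}
mesh.instanceMatrix.needsUpdate = true;
scene.add(mesh);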
Instancing allows you to ask the system to draw many things with one "ask" instead of an "ask" per thing. That means it ends up being much faster. You can think of it like anything: if you want 3 hamburgers, you could ask someone to make you one; when they finished you could ask them to make another; when they finished you could ask them to make a third. That would be much slower than just asking them to make 3 hamburgers at the start. It's not a perfect analogy, but it does point out how asking for multiple things one at a time is less efficient than asking for multiple things all at once.
Merging meshes is yet another solution. Following the bad analogy above, merging meshes is like making one big one-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting toppings and buns on one large burger is marginally faster than doing the same to 3 small burgers.
As for which is the best solution for you, that depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes, on the other hand, rotate in world space by default, so if you made instances of quads, or a merged set of quads, and tried to rotate them, they would turn and not face the camera like Points do. If you used sphere geometry, you'd have the issue that instead of computing only 6 vertices per quad with a circle drawn on it, you'd be computing hundreds or thousands of vertices per sphere, which would be slower than 6 vertices per quad.
So again, it requires a custom shader to keep the points facing the camera.
To do it with instancing, the short version is: you decide which vertex data are repeated for each instance. For example, for a textured quad we need 6 vertex positions and 6 uvs. For these you make normal BufferAttributes.
Then you decide which vertex data are unique to each instance. In your case the size, the color, and the center of the point. For each of these we make an InstancedBufferAttribute.
We add all of those attributes to an InstancedBufferGeometry and, as the last argument to THREE.InstancedMesh, tell it how many instances to draw.
At draw time you can think of it like this
for each instance
set size to the next value in the size attribute
set color to the next value in the color attribute
set center to the next value in the center attribute
call the vertex shader 6 times, with position and uv set to the nth value in their attributes.
In this way you get the same geometry (the positions and uvs) used multiple times but each time a few values (size, color, center) change.
body {
margin: 0;
}
#c {
width: 100vw;
height: 100vh;
display: block;
}
#info {
position: absolute;
right: 0;
bottom: 0;
color: red;
background: black;
}
<canvas id="c"></canvas>
<div id="info"></div>
<script type="module">
// Three.js - Picking - RayCaster w/Transparency
// from https://threejsfundamentals.org/threejs/threejs-picking-gpu.html
import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";
function main() {
const infoElem = document.querySelector("#info");
const canvas = document.querySelector("#c");
const renderer = new THREE.WebGLRenderer({ canvas });
const fov = 60;
const aspect = 2; // the canvas default
const near = 0.1;
const far = 200;
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.z = 30;
const scene = new THREE.Scene();
scene.background = new THREE.Color(0);
const pickingScene = new THREE.Scene();
pickingScene.background = new THREE.Color(0);
// put the camera on a pole (parent it to an object)
// so we can spin the pole to move the camera around the scene
const cameraPole = new THREE.Object3D();
scene.add(cameraPole);
cameraPole.add(camera);
function randomNormalizedColor() {
return Math.random();
}
function getRandomInt(n) {
return Math.floor(Math.random() * n);
}
function getCanvasRelativePosition(e) {
const rect = canvas.getBoundingClientRect();
return {
x: e.clientX - rect.left,
y: e.clientY - rect.top
};
}
const textureLoader = new THREE.TextureLoader();
const particleTexture =
"https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";
const vertexShader = `
attribute float size;
attribute vec3 customColor;
attribute vec3 center;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vColor = customColor;
vUv = uv;
vec3 viewOffset = position * size ;
vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
gl_Position = projectionMatrix * mvPosition;
}
`;
const fragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.5) discard;
gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
}
`;
const pickFragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.25) discard;
gl_FragColor = vec4(vColor.rgb, 1.0);
}
`;
const materialSettings = {
uniforms: {
texture: {
type: "t",
value: textureLoader.load(particleTexture)
}
},
vertexShader: vertexShader,
fragmentShader: fragmentShader,
blending: THREE.NormalBlending,
depthTest: true,
transparent: false
};
const createParticleMaterial = () => {
const material = new THREE.ShaderMaterial(materialSettings);
return material;
};
const createPickingMaterial = () => {
const material = new THREE.ShaderMaterial({
...materialSettings,
fragmentShader: pickFragmentShader,
blending: THREE.NormalBlending
});
return material;
};
const geometry = new THREE.InstancedBufferGeometry();
const pickingGeometry = new THREE.InstancedBufferGeometry();
const colors = [];
const sizes = [];
const pickingColors = [];
const pickingColor = new THREE.Color();
const centers = [];
const numSpheres = 30;
const positions = [
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
-0.5, 0.5,
0.5, -0.5,
0.5, 0.5,
];
const uvs = [
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1,
];
for (let i = 0; i < numSpheres; i++) {
colors[3 * i] = randomNormalizedColor();
colors[3 * i + 1] = randomNormalizedColor();
colors[3 * i + 2] = randomNormalizedColor();
const rgbPickingColor = pickingColor.setHex(i + 1);
pickingColors[3 * i] = rgbPickingColor.r;
pickingColors[3 * i + 1] = rgbPickingColor.g;
pickingColors[3 * i + 2] = rgbPickingColor.b;
sizes[i] = getRandomInt(5);
centers[3 * i] = getRandomInt(20);
centers[3 * i + 1] = getRandomInt(20);
centers[3 * i + 2] = getRandomInt(20);
}
geometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
geometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
geometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
);
geometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
geometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));
const material = createParticleMaterial();
const points = new THREE.InstancedMesh(geometry, material, numSpheres);
// setup geometry and material for GPU picking
pickingGeometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
pickingGeometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
pickingGeometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
);
pickingGeometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
pickingGeometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
);
const pickingMaterial = createPickingMaterial();
const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);
scene.add(points);
pickingScene.add(pickingPoints);
function resizeRendererToDisplaySize(renderer) {
const canvas = renderer.domElement;
const width = canvas.clientWidth;
const height = canvas.clientHeight;
const needResize = canvas.width !== width || canvas.height !== height;
if (needResize) {
renderer.setSize(width, height, false);
}
return needResize;
}
class GPUPickHelper {
constructor() {
// create a 1x1 pixel render target
this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
this.pixelBuffer = new Uint8Array(4);
}
pick(cssPosition, pickingScene, camera) {
const { pickingTexture, pixelBuffer } = this;
// set the view offset to represent just a single pixel under the mouse
const pixelRatio = renderer.getPixelRatio();
camera.setViewOffset(
renderer.getContext().drawingBufferWidth, // full width
renderer.getContext().drawingBufferHeight, // full height
(cssPosition.x * pixelRatio) | 0, // rect x
(cssPosition.y * pixelRatio) | 0, // rect y
1, // rect width
1 // rect height
);
// render the scene
renderer.setRenderTarget(pickingTexture);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// clear the view offset so rendering returns to normal
camera.clearViewOffset();
//read the pixel
renderer.readRenderTargetPixels(
pickingTexture,
0, // x
0, // y
1, // width
1, // height
pixelBuffer
);
const id =
(pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
infoElem.textContent = `You clicked sphere number ${id}`;
return id;
}
}
const pickHelper = new GPUPickHelper();
function render(time) {
time *= 0.001; // convert to seconds;
if (resizeRendererToDisplaySize(renderer)) {
const canvas = renderer.domElement;
camera.aspect = canvas.clientWidth / canvas.clientHeight;
camera.updateProjectionMatrix();
}
cameraPole.rotation.y = time * 0.1;
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
function onClick(e) {
const pickPosition = getCanvasRelativePosition(e);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
function onTouch(e) {
const touch = e.touches[0];
const pickPosition = getCanvasRelativePosition(touch);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
window.addEventListener("mousedown", onClick);
window.addEventListener("touchstart", onTouch);
}
main();
</script>
This is quite a broad topic. In short, both merging and instancing are about reducing the number of draw calls when rendering something.
If you bind your sphere geometry once but keep re-rendering it, it costs you more to tell your computer to draw it many times than it costs the computer to actually draw it. You end up with the GPU, a powerful parallel processing device, sitting idle.
Obviously, if you create a unique sphere at each point in space and merge them all, you pay the price of telling the GPU to render only once, and it will be busy rendering thousands of your spheres.
However, merging will increase your memory footprint, and there is some overhead when you're actually creating the unique data. Instancing is a built-in, clever way of achieving the same effect at a fraction of the memory cost.
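As a rough sketch of the merging path (assuming the BufferGeometryUtils helper from the three.js examples; the helper is named mergeBufferGeometries in older releases and mergeGeometries in newer ones):
const geometries = [];
for (let i = 0; i < 1000; i++) {
  const g = new THREE.SphereBufferGeometry(1, 8, 6);
  // bake each sphere's position into its own copy of the vertex data
  g.translate(Math.random() * 40 - 20, Math.random() * 40 - 20, Math.random() * 40 - 20);
  geometries.push(g); // unique data per sphere: bigger memory footprint
}
const merged = BufferGeometryUtils.mergeBufferGeometries(geometries);
scene.add(new THREE.Mesh(merged, new THREE.MeshNormalMaterial())); // one draw call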
I have an article written on this topic.
I'm very new to this community, so if I claim something that isn't right, please correct me.
Now to the point: I'm designing a particle system using the Three.js library. In particular, I'm using THREE.Geometry() and controlling the vertices with a shader. I want my particles' movement restricted inside a box, which means that when a particle crosses a face of the box, its new position should be on the opposite side of that face.
Here's my approach in the vertex shader:
uniform float elapsedTime;
void main() {
gl_PointSize = 3.2;
vec3 pos = position;
pos.y -= elapsedTime*2.1;
if( pos.y < -100.0) {
pos.y = 100.0;
}
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0 );
}
The elapsedTime is sent from the JavaScript animation loop via a uniform, and the y position of each vertex is updated according to the time. As a test, I want a particle that drops below the bottom plane (y = -100) to move to the top plane. That was my plan. And this is the result after they all reach the bottom:
Starting to fall
After reaching the bottom
So, what am I missing here?
You can achieve it using the mod function:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 300);
var renderer = new THREE.WebGLRenderer({
antialias: true
});
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
var controls = new THREE.OrbitControls(camera, renderer.domElement);
var gridTop = new THREE.GridHelper(200, 10);
gridTop.position.y = 100;
var gridBottom = new THREE.GridHelper(200, 10);
gridBottom.position.y = -100;
scene.add(gridTop, gridBottom);
var pts = [];
for (let i = 0; i < 1000; i++) {
pts.push(new THREE.Vector3(Math.random() - 0.5, Math.random() - 0.5, Math.random() - 0.5).multiplyScalar(100));
}
var geom = new THREE.BufferGeometry().setFromPoints(pts);
var mat = new THREE.PointsMaterial({
size: 2,
color: "aqua"
});
var uniforms = {
time: {
value: 0
},
highY: {
value: 100
},
lowY: {
value: -100
}
}
mat.onBeforeCompile = shader => {
shader.uniforms.time = uniforms.time;
shader.uniforms.highY = uniforms.highY;
shader.uniforms.lowY = uniforms.lowY;
console.log(shader.vertexShader);
shader.vertexShader = `
uniform float time;
uniform float highY;
uniform float lowY;
` + shader.vertexShader;
shader.vertexShader = shader.vertexShader.replace(
`#include <begin_vertex>`,
`#include <begin_vertex>
float totalY = highY - lowY;
transformed.y = highY - mod(highY - (transformed.y - time * 20.), totalY);
`
);
}
var points = new THREE.Points(geom, mat);
scene.add(points);
var clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
uniforms.time.value = clock.getElapsedTime();
renderer.render(scene, camera);
});
body {
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
<script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>
You can not change state in a shader. A vertex shader's only outputs are gl_Position (to generate points/lines/triangles) and varyings that get passed to the fragment shader. A fragment shader's only output is gl_FragColor (in general). So trying to change pos.y will do nothing: the moment the shader exits, your change is forgotten.
For your particle code, though, you could make the position a repeating function of the time:
const float duration = 5.0;
float t = fract(elapsedTime / duration);
pos.y = mix(-100.0, 100.0, t);
Assuming elapsedTime is in seconds then pos.y will go from -100 to 100 over 5 seconds and repeat.
Note in this case all the particles will fall at the same time. You could add an attribute to give each particle a different time offset (a sketch follows below), or you could work their positions into your own formula. Related to that you might find this article useful.
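A sketch of that per-particle offset, assuming a BufferGeometry-based points object; particleCount, duration, and the attribute name are illustrative (older three.js uses addAttribute instead of setAttribute):
// JavaScript: one random phase offset per particle
const offsets = new Float32Array(particleCount);
for (let i = 0; i < particleCount; i++) offsets[i] = Math.random() * duration;
geometry.setAttribute("timeOffset", new THREE.BufferAttribute(offsets, 1));

// GLSL side: shift each particle's phase by its offset
// attribute float timeOffset;
// float t = fract((elapsedTime + timeOffset) / duration);
// pos.y = mix(-100.0, 100.0, t);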
You could also do the particle movement in JavaScript like this example and this one, updating the positions in the Geometry (or better, BufferGeometry)
Yet another solution is to do the movement in a separate shader, by storing the positions in a texture and updating them into a new texture each frame, then using that texture as input to another set of shaders that draws the particles.
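The core of that texture-based (GPGPU) approach is two render targets that ping-pong each frame. A bare sketch; simScene, simCamera, simMaterial, and drawMaterial are assumptions standing in for your own simulation and draw passes:
const size = 128; // width/height of the position texture (one texel per particle)
let rtA = new THREE.WebGLRenderTarget(size, size, { type: THREE.FloatType });
let rtB = rtA.clone();
function stepSimulation() {
  simMaterial.uniforms.positions.value = rtA.texture; // read last frame's positions
  renderer.setRenderTarget(rtB);                      // write this frame's positions
  renderer.render(simScene, simCamera);
  renderer.setRenderTarget(null);
  [rtA, rtB] = [rtB, rtA];                            // swap for the next step
  drawMaterial.uniforms.positions.value = rtA.texture; // particle pass samples the result
}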
I'm trying to replicate a brush stroke with a fragment shader. Here is my demo. See the stretched edges at the start of the animation.
I have this setup with three.js:
material = new THREE.ShaderMaterial( {
side: THREE.DoubleSide,
uniforms: {
time: { type: 'f', value: 0 },
uvRate: {
value: new THREE.Vector2(1,3.7) // aspect ratio of image
},
texture: {
value: THREE.ImageUtils.loadTexture('img/stroke.png')
},
},
vertexShader: vertex,
fragmentShader: fragment
});
plane = new THREE.Mesh(new THREE.PlaneGeometry( 1,1, 1, 1 ),material);
Vertex shader to calculate aspect ratio:
uniform vec2 uvRate;
varying vec2 vUv1;
void main() {
    vUv1 = uv - 0.5;
    vUv1 *= uvRate.xy;
    vUv1 += 0.5;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
And the fragment shader, with a little math to scale the UVs for the "brushing":
uniform float time;
uniform sampler2D texture;
varying vec2 vUv1;
// math behind this: https://www.desmos.com/calculator/8qdmw3a91w
float scale1(float coord, float progress){
    float final;
    coord = coord / progress;
    if (coord < 0.88) {
        final = coord * progress;
    } else {
        final = pow((3. * coord - 2.4), 3.) + 0.75 + coord / 8.;
        final *= progress;
    }
    return final;
}
void main() {
    float p = clamp(fract(time / 20.) + 0.3, 0., 1.); // 0.3 -> 1.0
    vec2 newuv = vUv1;
    newuv.x = scale1(vUv1.x, p);
    gl_FragColor = texture2D(texture, newuv);
}
But I get this kind of edge:
Does somebody know the reason behind this, and how to solve it?
It might have something to do with the texture wrapping, since it clamps to the edge by default (THREE.ClampToEdgeWrapping).
Try changing the texture's wrapS and wrapT properties after initialization, something like:
plane.material.uniforms.texture.value.wrapS = plane.material.uniforms.texture.value.wrapT = THREE.RepeatWrapping;
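Note that the loaded texture lives on the uniform's value. If the texture has already been uploaded to the GPU, you also need texture.needsUpdate = true for the new wrap mode to take effect. Alternatively, set the wrapping when the texture is created; a sketch, assuming a three.js version with THREE.TextureLoader:
var texture = new THREE.TextureLoader().load('img/stroke.png');
texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
// then hand it to the material: uniforms: { texture: { value: texture } }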
I could not find an example of THREE.BufferGeometry with texture coordinates. Is it supposed to be used for textured meshes? I can't get it to work. Here is my test code:
var quad_vertices =
[
-30.0, 30.0, 0.0,
30.0, 30.0, 0.0,
30.0, -30.0, 0.0,
-30.0, -30.0, 0.0
];
var quad_uvs =
[
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0
];
var quad_indices =
[
0, 2, 1, 0, 3, 2
];
var geometry = new THREE.BufferGeometry();
geometry.attributes =
{
position:
{
itemSize: 3,
array: new Float32Array(3 * 4)
},
uv:
{
itemSize: 2,
array: new Float32Array(2 * 4)
},
index:
{
itemSize: 1,
array: new Uint16Array(6)
}
};
var positions = geometry.attributes.position.array;
var uvs = geometry.attributes.uv.array;
var indices = geometry.attributes.index.array;
var i;
for(i = 0; i < positions.length; i += 3)
{
positions[i] = quad_vertices[i];
positions[i + 1] = quad_vertices[i + 1];
positions[i + 2] = quad_vertices[i + 2];
}
for(i = 0; i < uvs.length; i += 2)
{
uvs[i] = quad_uvs[i];
uvs[i + 1] = quad_uvs[i + 1];
}
for(i = 0; i < indices.length; i++)
indices[i] = quad_indices[i];
var texture = THREE.ImageUtils.loadTexture('./assets/texture.png');
texture.anisotropy = renderer.getMaxAnisotropy();
var material = new THREE.MeshBasicMaterial( { map: texture } );
var mesh = new THREE.Mesh(geometry, material);
mesh.position.z = -100;
scene.add(mesh);
Just creating the mesh with THREE.Geometry works fine, so I have no idea what can be wrong with this code. Any thoughts?
Here is a working example of an indexed BufferGeometry with uvs. I updated your example to work with three.js r83. I saw two problems with the old code. First, you can't just set geometry.attributes equal to a plain JSON object: THREE.BufferAttribute is a class, and your JSON is missing the methods on its prototype that THREE.Renderer requires. Second, THREE.ImageUtils has been replaced by THREE.TextureLoader, so I updated that in the example as well.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth/window.innerHeight, 0.1, 1000 );
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
var quad_vertices =
[
-30.0, 30.0, 0.0,
30.0, 30.0, 0.0,
30.0, -30.0, 0.0,
-30.0, -30.0, 0.0
];
var quad_uvs =
[
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0
];
var quad_indices =
[
0, 2, 1, 0, 3, 2
];
var geometry = new THREE.BufferGeometry();
var vertices = new Float32Array( quad_vertices );
// Each vertex has one uv coordinate for texture mapping
var uvs = new Float32Array( quad_uvs);
// Use the four vertices to draw the two triangles that make up the square.
var indices = new Uint32Array( quad_indices )
// itemSize = 3 because there are 3 values (components) per vertex
geometry.addAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
geometry.addAttribute( 'uv', new THREE.BufferAttribute( uvs, 2 ) );
geometry.setIndex( new THREE.BufferAttribute( indices, 1 ) );
// Load the texture asynchronously
var textureLoader = new THREE.TextureLoader();
textureLoader.load('./assets/texture.jpg', function (texture){
console.log('texture loaded');
var material = new THREE.MeshBasicMaterial( {map: texture });
var mesh = new THREE.Mesh( geometry, material );
mesh.position.z = -100;
scene.add(mesh);
renderer.render(scene, camera);
}, undefined, function (err) {
console.error('texture not loaded', err)
});
For further reference:
Creating a scene
BufferAttribute
For those looking to combine an indexed buffer geometry with a texture and a custom shader material (I believe this approaches the upper bound of performance), I used the following approach. All of the real work happens in loadImage() and in the vertex and fragment shaders; the rest is just boilerplate to set up Three.js (version 92):
/**
* Generate a scene object with a background color
**/
function getScene() {
var scene = new THREE.Scene();
scene.background = new THREE.Color(0xffffff);
return scene;
}
/**
* Generate the camera to be used in the scene. Camera args:
* [0] field of view: identifies the portion of the scene
* visible at any time (in degrees)
* [1] aspect ratio: identifies the aspect ratio of the
* scene in width/height
* [2] near clipping plane: objects closer than the near
* clipping plane are culled from the scene
* [3] far clipping plane: objects farther than the far
* clipping plane are culled from the scene
**/
function getCamera() {
var aspectRatio = window.innerWidth / window.innerHeight;
var camera = new THREE.PerspectiveCamera(75, aspectRatio, 0.1, 1000);
camera.position.set(0, 1, 10);
return camera;
}
/**
* Generate the renderer to be used in the scene
**/
function getRenderer() {
// Create the canvas with a renderer
var renderer = new THREE.WebGLRenderer({antialias: true});
// Add support for retina displays
renderer.setPixelRatio(window.devicePixelRatio);
// Specify the size of the canvas
renderer.setSize(window.innerWidth, window.innerHeight);
// Add the canvas to the DOM
document.body.appendChild(renderer.domElement);
return renderer;
}
/**
* Generate the controls to be used in the scene
* @param {obj} camera: the three.js camera for the scene
* @param {obj} renderer: the three.js renderer for the scene
**/
function getControls(camera, renderer) {
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.zoomSpeed = 0.4;
controls.panSpeed = 0.4;
return controls;
}
/**
* Load image
**/
function loadImage() {
var geometry = new THREE.BufferGeometry();
/*
Now we need to push some vertices into that geometry to identify the coordinates the geometry should cover
*/
// Identify the image size
var imageSize = {width: 10, height: 7.5};
// Identify the x, y, z coords where the image should be placed
var coords = {x: -5, y: -3.75, z: 0};
// Add one vertex for each corner of the image, using the
// following order: lower left, lower right, upper right, upper left
var vertices = new Float32Array([
coords.x, coords.y, coords.z, // bottom left
coords.x+imageSize.width, coords.y, coords.z, // bottom right
coords.x+imageSize.width, coords.y+imageSize.height, coords.z, // upper right
coords.x, coords.y+imageSize.height, coords.z, // upper left
])
// set the uvs for this box; these identify the following corners:
// lower-left, lower-right, upper-right, upper-left
var uvs = new Float32Array([
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0,
])
// indices = sequence of index positions in `vertices` to use as vertices
// we make two triangles but only use 4 distinct vertices in the object
// the second argument to THREE.BufferAttribute is the number of elements
// in the first argument per vertex
geometry.setIndex([0,1,2, 2,3,0])
geometry.addAttribute('position', new THREE.BufferAttribute( vertices, 3 ));
geometry.addAttribute('uv', new THREE.BufferAttribute( uvs, 2) )
// Create a texture loader so we can load our image file
var loader = new THREE.TextureLoader();
// specify the url to the texture
var url = 'https://s3.amazonaws.com/duhaime/blog/tsne-webgl/assets/cat.jpg';
// specify custom uniforms and attributes for shaders
// Uniform types: https://github.com/mrdoob/three.js/wiki/Uniforms-types
var material = new THREE.ShaderMaterial({
uniforms: {
texture: {
type: 't',
value: loader.load(url)
},
},
vertexShader: document.getElementById('vertex-shader').textContent,
fragmentShader: document.getElementById('fragment-shader').textContent
});
// Combine our image geometry and material into a mesh
var mesh = new THREE.Mesh(geometry, material);
// Set the position of the image mesh in the x,y,z dimensions
mesh.position.set(0,0,0)
// Add the image to the scene
scene.add(mesh);
}
/**
* Render!
**/
function render() {
requestAnimationFrame(render);
renderer.render(scene, camera);
controls.update();
};
var scene = getScene();
var camera = getCamera();
var renderer = getRenderer();
var controls = getControls(camera, renderer);
loadImage();
render();
html, body { width: 100%; height: 100%; background: #000; }
body { margin: 0; overflow: hidden; }
canvas { width: 100%; height: 100%; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/92/three.min.js"></script>
<script src="https://threejs.org/examples/js/controls/TrackballControls.js"></script>
<script type='x-shader/x-vertex' id='vertex-shader'>
/**
* The vertex shader's main() function must define `gl_Position`,
* which describes the position of each vertex in the space.
*
* To do so, we can use the following variables defined by Three.js:
*
* uniform mat4 modelViewMatrix - combines:
* model matrix: maps a point's local coordinate space into world space
* view matrix: maps world space into camera space
*
* uniform mat4 projectionMatrix - maps camera space into screen space
*
* attribute vec3 position - sets the position of each vertex
*
* attribute vec2 uv - determines the relationship between vertices and textures
*
* `uniforms` are constant across all vertices
*
* `attributes` can vary from vertex to vertex and are defined as arrays
* with length equal to the number of vertices. Each index in the array
* is an attribute for the corresponding vertex
*
* `varyings` are values passed from the vertex to the fragment shader
**/
varying vec2 vUv; // pass the uv coordinates of each pixel to the frag shader
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script type='x-shader/x-fragment' id='fragment-shader'>
/**
* The fragment shader's main() function must define `gl_FragColor`,
* which describes the pixel color of each pixel on the screen.
*
* To do so, we can use uniforms passed into the shader and varyings
* passed from the vertex shader
**/
precision highp float; // set float precision (optional)
uniform sampler2D texture; // identify the texture as a uniform argument
varying vec2 vUv; // identify the uv values as a varying attribute
void main() {
gl_FragColor = texture2D(texture, vUv);
}
</script>