Best way to paint rectangles in three.js

EDIT: I solved my problem and this is what it was for. It now uses raw WebGL and two triangles for each rectangle.
I'm a seasoned developer, but know next to nothing about 3d development.
I need to animate a million small rectangles where I set the coordinates in Javascript (rather than through a shader). (EDIT: It's a 2D job and I'm looking at webgl for performance reasons only.) I tweaked an existing threejs sample that uses "Points" to modify the coordinates in a BufferGeometry via Javascript and that performs really well, even with a million points.
The three.js concept of "Points", however, is a bit weird in that it appears they have to be squares - my rectangles aren't quite squares though, and each has slightly different dimensions.
I can think of a couple of workarounds, such as having foreground-colored squares partially overlap with squares of a background-color, thereby molding them into the correct rectangle. That's quite hacky though.
Another possibility would be to not do it with points but rather with proper triangles; but then I need to set 12 values from Javascript (2 triangles, 3 vertices, 2 dimensions) rather than just the needed 4 (x, y, width, height). I suppose that could be improved with a vertex shader somehow, but that will be tricky for a noob like me.
I'm looking for some suggestions or, alternatively, a sample on how to set a large number of vertex coordinates from Javascript in threejs (the existing samples all appear to assume that manipulation is done in shaders, but that doesn't work so well for my use case).
EDIT - Here's a picture of how the rectangles could be laid out:
The rectangles' top and bottom edges are arbitrary, but they are organized into columns of arbitrary widths.
The rectangles of each column all have the same, uniform color.
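For reference, here is a minimal sketch of the two-triangles-per-rectangle idea expressed with a three.js BufferGeometry rather than the raw WebGL I ended up with. MAX_RECTS, setRect and the flat z = 0 layout are illustrative assumptions only, and the usual scene/camera/renderer setup is assumed:
// One rectangle = 4 corner vertices + 2 indexed triangles; only x, y, width, height come from JS.
const MAX_RECTS = 100000;                                  // illustrative upper bound
const positions = new Float32Array(MAX_RECTS * 4 * 3);     // 4 corners per rect, xyz each
const indices = new Uint32Array(MAX_RECTS * 6);            // 2 triangles per rect
for (let i = 0; i < MAX_RECTS; i++) {
  const v = i * 4;
  indices.set([v, v + 1, v + 2, v, v + 2, v + 3], i * 6);
}
const geometry = new THREE.BufferGeometry();
geometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
geometry.setIndex(new THREE.BufferAttribute(indices, 1));
// Write the four corners of rectangle i into the position buffer.
function setRect(i, x, y, w, h) {
  positions.set([
    x, y, 0,
    x + w, y, 0,
    x + w, y + h, 0,
    x, y + h, 0
  ], i * 12);
}
// After changing rectangles each frame:
// geometry.attributes.position.needsUpdate = true;
scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: "aqua" })));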

Just an option with canvas and .map:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 10);
camera.lookAt(scene.position);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
var gh = new THREE.GridHelper(10, 10, "black", "black");
gh.rotation.x = Math.PI * 0.5;
gh.position.z = 0.01;
scene.add(gh);
var canvas = document.createElement("canvas");
var map = new THREE.CanvasTexture(canvas);
canvas.width = 512;
canvas.height = 512;
var ctx = canvas.getContext("2d");
ctx.fillStyle = "gray";
ctx.fillRect(0, 0, canvas.width, canvas.height);
function drawRectangle(x, y, width, height, color) {
  let xUnit = canvas.width / 10;
  let yUnit = canvas.height / 10;
  let x_ = x * xUnit;
  let y_ = y * yUnit;
  let w_ = width * xUnit;
  let h_ = height * yUnit;
  ctx.fillStyle = color;
  ctx.fillRect(x_, y_, w_, h_);
  map.needsUpdate = true;
}
drawRectangle(1, 1, 4, 3, "aqua");
drawRectangle(0, 6, 6, 3, "magenta");
drawRectangle(3, 2, 6, 6, "yellow");
var plane = new THREE.Mesh(new THREE.PlaneBufferGeometry(10, 10), new THREE.MeshBasicMaterial({
  color: "white",
  map: map
}));
scene.add(plane);
renderer.setAnimationLoop(() => {
  renderer.render(scene, camera);
});
body {
  overflow: hidden;
  margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
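If the rectangles have to move every frame, the canvas can simply be redrawn inside the animation loop before rendering. A rough sketch replacing the loop above (the moving aqua rectangle is made up for illustration; for very large rectangle counts the per-frame canvas fills will likely become the bottleneck):
renderer.setAnimationLoop((t) => {
  ctx.fillStyle = "gray";
  ctx.fillRect(0, 0, canvas.width, canvas.height);             // clear to the background color
  drawRectangle(3 + Math.sin(t / 1000) * 2, 1, 4, 3, "aqua");  // drawRectangle() already sets map.needsUpdate
  drawRectangle(0, 6, 6, 3, "magenta");
  renderer.render(scene, camera);
});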

Read the source for these samples:
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_custom_attributes_particles
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing_billboards
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_points
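As a pointer for the instancing route from the samples above, here is a rough, untested sketch of describing each rectangle with just four numbers per instance (x, y, width, height) and letting a tiny vertex shader expand a shared unit quad. The attribute name rect and the shader are my own assumptions, not code from those examples, and a reasonably recent three.js build is assumed:
const COUNT = 1000000;
const rects = new Float32Array(COUNT * 4);                 // x, y, width, height per rectangle
const quad = new THREE.PlaneBufferGeometry(1, 1);          // shared unit quad
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = quad.index;
geometry.attributes.position = quad.attributes.position;
geometry.setAttribute("rect", new THREE.InstancedBufferAttribute(rects, 4));
geometry.instanceCount = COUNT;
const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec4 rect;                                   // filled from JavaScript
    void main() {
      vec2 p = rect.xy + (position.xy + 0.5) * rect.zw;    // unit quad -> rectangle
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 0.0, 1.0);
    }`,
  fragmentShader: `void main() { gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0); }`
});
scene.add(new THREE.Mesh(geometry, material));
// Per frame: write into rects, then set geometry.attributes.rect.needsUpdate = true;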

Related

ThreeJs: Add a Gridhelper which always face the perspective camera

I have a threejs scene view containing a mesh, a perspective camera, and in which I move the camera with OrbitControls.
I need to add a measurement grid to a three.js view which "faces" my perspective camera.
It works at start-up with the following code, by applying an X rotation of Pi/2 to the grid helper:
window.camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.01, 300 );
window.camera.position.z = 150;
window.grid1 = new THREE.GridHelper(500, ~~(500 * 2))
window.grid1.material.transparent = true;
window.grid1.material.depthTest = false;
window.grid1.material.blending = THREE.NormalBlending;
window.grid1.renderOrder = 100;
window.grid1.rotation.x = Math.PI / 2;
window.scene.add(window.grid1);
window.controls = new OrbitControls(window.camera, window.renderer.domElement );
window.controls.target.set( 0, 0.5, 0 );
window.controls.update();
window.controls.enablePan = false;
window.controls.enableDamping = true;
But once I start moving with OrbitControls, the grid helper doesn't stay aligned with the camera.
I tried using the following in the render loop:
window.grid1.quaternion.copy(window.camera.quaternion);
And
window.grid1.lookAt(window.camera.position)
which seems to work partially: the GridHelper is aligned to the "floor" but not facing the camera.
How can I achieve that?
Be gentle I'm starting with threejs :)
This is a bit of a hack, but you could wrap your grid in a THREE.Group and rotate it instead:
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.01, 300 );
camera.position.z = 150;
const grid1 = new THREE.GridHelper(500, ~~(500 * 2));
grid1.material.transparent = true;
grid1.material.depthTest = false;
grid1.material.blending = THREE.NormalBlending;
grid1.renderOrder = 100;
grid1.rotation.x = Math.PI / 2;
const gridGroup = new THREE.Group();
gridGroup.add(grid1);
scene.add(gridGroup);
// ...
Then, in your render loop, you make your group face the camera (not the grid):
gridGroup.lookAt(camera.position)
This works because it kind of simulates the behaviour of setting the normal in a THREE.Plane. The GridHelper is rotated to be perpendicular to the camera, and then it is wrapped in a group with no rotation. So by rotating the group, the grid will always be offset so that it is perpendicular to the camera.
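In a render loop with OrbitControls damping enabled, that ends up looking roughly like this (a sketch assuming the controls, gridGroup, camera, scene and renderer variables from the snippets above):
function animate() {
  requestAnimationFrame(animate);
  controls.update();                  // required because enableDamping is true
  gridGroup.lookAt(camera.position);  // rotate the wrapper group, not the grid itself
  renderer.render(scene, camera);
}
animate();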

How to preserve threejs texture scale while applying texture rotation

I'd like to enable a user to rotate a texture on a rectangle while keeping the aspect ratio of the texture image intact. I'm doing the rotation of a 1:1 aspect ratio image on a surface that is rectangular (say width: 2 and length: 1)
Steps to reproduce:
In the below texture rotation example
https://threejs.org/examples/?q=rotation#webgl_materials_texture_rotation
If we change one of the faces of the geometry like below:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_texture_rotation.html#L57
var geometry = new THREE.BoxBufferGeometry( 20, 10, 10 );
Then you can see that as you play around with the rotation control, the image aspect ratio is distorted (from a square to a weird shape).
At 0 degrees:
At some angle between 0 and 90 degrees:
I understand that by changing the repeatX and repeatY factor I can control this. It's also easy to see what the values would be at 0 degree, 90 degree rotations.
But I'm struggling to come up with the formula for repeatX and repeatY that works for any texture rotation given length and width of the rectangular face.
Unfortunately when stretching geometry like that, you'll get a distortion in 3D space, not UV space. In this example, one UV.x unit occupies twice as much 3D space as one UV.y unit:
This is giving you those horizontally-skewed diamonds when in between rotations:
Sadly, there's no way to solve this with texture matrix transforms. The horizontal stretching will be applied after the texture transform, in 3D space, so texture.repeat won't help you avoid this. The only way to solve this is by modifying the UVs so the UV.x units take up as much 3D space as UV.y units:
With complex models, you'd do this kind of "equalizing" in a 3D editor, but since the geometry is simple enough, we can do it via code. See the example below. I'm using a width/height ratio variable in my UV.y remapping, so that the UV transformations match up regardless of how much wider the face is.
//////// Boilerplate Three setup
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("canvas")});
const camera = new THREE.PerspectiveCamera(50, 1, 1, 100);
camera.position.z = 3;
const scene = new THREE.Scene();
/////////////////// CREATE GEOM & MATERIAL
const width = 2;
const height = 1;
const ratio = width / height; // <- magic number that will help with UV remapping
const geometry = new THREE.BoxBufferGeometry(width, height, width);
let uvY;
const uvArray = geometry.getAttribute("uv").array;
// Re-map UVs to avoid distortion
for (let i2 = 0; i2 < uvArray.length; i2 += 2) {
  uvY = uvArray[i2 + 1]; // Extract Y value,
  uvY -= 0.5;            // center around 0
  uvY /= ratio;          // divide by w/h ratio
  uvY += 0.5;            // remove center around 0
  uvArray[i2 + 1] = uvY;
}
geometry.getAttribute("uv").needsUpdate = true;
const uvMap = new THREE.TextureLoader().load("https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/textures/uv_grid_opengl.jpg");
// Now we can apply texture transformations as expected
uvMap.center.set(0.5, 0.5);
uvMap.repeat.set(0.25, 0.5);
uvMap.anisotropy = 16;
const material = new THREE.MeshBasicMaterial({map: uvMap});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
window.addEventListener("mousemove", onMouseMove);
window.addEventListener("resize", resize);
// Add rotation on mousemove
function onMouseMove(ev) {
  uvMap.rotation = (ev.clientX / window.innerWidth) * Math.PI * 2;
}
function resize() {
  const width = window.innerWidth;
  const height = window.innerHeight;
  renderer.setSize(width, height);
  camera.aspect = width / height;
  camera.updateProjectionMatrix();
}
function animate(time) {
  mesh.rotation.y = Math.cos(time / 3000) * 2;
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
resize();
requestAnimationFrame(animate);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://threejs.org/build/three.js"></script>
<canvas></canvas>
First of all, I agree with the solution #Marquizzo provided; setting the UVs explicitly on the geometry should be the easiest way to solve your problem.
But #Marquizzo did not answer why changing the matrix of the texture (setting repeatX and repeatY) does not work.
We all know the 2D rotation matrix R:
R(θ) = | cos θ  -sin θ |
       | sin θ   cos θ |
UVs are calculated in the shader with a transform matrix T, which is the texture matrix from your question.
T * UV = new UV
To simplify the question, we only consider rotation, and assume we have an additional matrix X for calculating the new UV. Then we have
X * R * UV = new UV
The question now is whether we can find a solution of X such that, for any rotation, the new UV of any point in your question is calculated correctly. If there is a solution of X, then we can simply use
var X = new Matrix3();
//X.set(x,y,z,...)
texture.matrix.premultiply(X);
Otherwise, we can't find the approach you expected.
Let's create several equations to figure out X.
In the pic below, ABCD is one face of your geometry, and the transparent green is the texture. The UV of point A is (0,1), point B is (0,0), and (1,0), (1,1) for C and D respectively.
The first equation comes from the consideration that, without any rotation, the original UV should never be changed (the UV of A is always (0,1)). So we should have
X * I * (0, 1) = (0, 1) // I is the identity matrix
From here we can see that X should be the identity matrix.
Then let's see whether the identity matrix X can satisfy the second equation. What is the second equation? To simplify again, let B be the rotation centre (origin) and rotate the texture 90 degrees (counterclockwise). We use -90 to calculate the UVs even though we rotate by 90 degrees.
The new UV for point A after rotating the texture 90 degrees should be the current value of E, which is (a/b, 0). Then we have
X * R(-90°) * (0, 1) = (a/b, 0)
From this equation we can see that X should not be an identity matrix, which means WE ARE NOT ABLE TO FIND A SOLUTION OF X TO SOLVE YOUR PROBLEM WITH
X * R * UV = new UV
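To spell that out (my own reconstruction of the step behind the missing equation): with θ = -90°, cos θ = 0 and sin θ = -1, so
R(-90°) * (0, 1) = (1, 0)
With X as the identity matrix this yields (1, 0), but the correct new UV for A is (a/b, 0), and a/b ≠ 1 because the face is not square. So the X required by the second equation is not the identity matrix demanded by the first one, which is the contradiction.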
Certainly, you could change the shader that calculates the new UVs, but that is even harder than the approach #Marquizzo provided.

Three.js raycaster intersection with sprites is completely off to the left

I have sprites with text on screen, placed in spherical pattern and I want to allow user to click on individual words and highlight them.
Now the problem is that when I do raycasting with raycaster.intersectObjects(), it returns sprites that are somewhere completely different from where the click happened (usually it highlights words that are to the left of the words clicked). For debugging purposes I actually drew the rays from the raycaster object, and they pretty much go through the words I clicked.
In this picture I clicked the words "emma", "universe" and "inspector legrasse", but the words that got highlighted are in red; I also rotated the camera so we can see the lines.
here is the relevant code:
Sprite creation:
var canvas = document.createElement('canvas');
var context = canvas.getContext('2d');
canvas.height = 256;
canvas.width = 1024;
...
context.fillStyle = "rgba(0, 0, 0, 1.0)";
context.fillText(message, borderThickness, fontsize + borderThickness);
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
var spriteMaterial = new THREE.SpriteMaterial({map: texture});
var sprite = new THREE.Sprite(spriteMaterial);
sprite.scale.set(400, 100, 1.0);
Individual sprites are then added to the "sprites" array, and then added to the scene.
Click detection:
function clickOnSprite(event) {
  mouse.x = (event.clientX / renderer.domElement.clientWidth) * 2 - 1;
  mouse.y = -(event.clientY / renderer.domElement.clientHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  drawRaycastLine(raycaster);
  var intersects = raycaster.intersectObjects(sprites);
  if (intersects.length > 0) {
    intersects[0].object.material.color.set(0xff0000);
    console.log(intersects[0]);
  }
}
I am using perspective camera with orbit controls.
I came to the conclusion that this cannot currently be done with sprites.
As suggested in the comments, I ended up using plane geometries instead of sprites.
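For completeness, a minimal sketch of the plane-based replacement. It reuses the canvas texture built above; spritePosition and the labels array are placeholder names of mine, and the 400×100 size is kept only for illustration:
// A plane mesh gives the raycaster real triangles to intersect, unlike the sprite.
var labelMaterial = new THREE.MeshBasicMaterial({ map: texture, transparent: true });
var label = new THREE.Mesh(new THREE.PlaneBufferGeometry(400, 100), labelMaterial);
label.position.copy(spritePosition);   // wherever the sprite used to sit (placeholder)
scene.add(label);
labels.push(label);
// In the render loop, billboard the labels manually:
// labels.forEach(function (l) { l.quaternion.copy(camera.quaternion); });
// Click detection is unchanged: raycaster.intersectObjects(labels);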

Three.js cube face rotation vector in relation to camera

I have a rotating sphere with a div attached; the example can be viewed here: https://jsfiddle.net/ao5wdm04/
I calculate the x and y values and place the div using a translate3d transform and that works quite well.
My question is how I can get the values for the rotateX, rotateY and rotateZ (or rotate3d) transforms so that the div is tangent to the sphere surface. I know the cube mesh faces the sphere center, so I assume the rotation vector of the outward-facing normal in relation to the camera would contain the values I need, but I'm not quite sure how to obtain these.
Update
By using Euler angles I'm almost achieving the desired effect, shown here: https://jsfiddle.net/ao5wdm04/1/ but the rotation is not large enough.
Disclaimer: I know nothing about three.js. I've just done a bit of OpenGL.
Your Euler angles are coming from a model-view-projected origin (lines 74-80). I can't see the logic behind this.
If your div is on the sphere surface, then it should be oriented by the normal of the sphere at the location of the div. Fortunately, you already have these angles. They are named rotation.
If you replace the euler angles in lines 82-84 with the rotation angles used to position the div, then in my browser the div appears edge on when it is at the edge of the circle, and face on when it is at the centre. It kind of looks like it is moving in a circle, edge on to the screen. Is this the effect you want?
My modification to the linked code:
82 var rotX = (rotation.x * (180/ Math.PI));
83 var rotY = (rotation.y * (180/ Math.PI));
84 var rotZ = 0;
EDIT
Ah, ok. The rotation variable is just that of the camera. It governs the tangent at the equator. You also need to modify the orientation to account for latitude.
Make rotY equal to the negative of your latitude. Then make sure that this rotation happens before the equatorial rotation; rotations are not commutative.
In summary, changes from the code at https://jsfiddle.net/ao5wdm04/1/ are as follows:
27 var lat = 45 * Math.PI / 180;
...
82 var rotX = (rotation.x * (180/ Math.PI));
83 var rotY = - 45;
...
88 document.getElementById('face').style.webkitTransform = 'translate3d(' + x + 'px,' + y + 'px,0px) rotateY('+rotX+'deg) rotateX('+rotY+'deg)';
I don't know how the latitude should propagate between the init and render functions. As I said, I'm not familiar with the language.
For details about transformation and rotation in OpenGL or any other graphics API, please go through here.
Basics -
There are basically two kinds of transformations in the 3D world:
Translation
Rotation
A small example of these is given here.
If you go through all of them, I think you will have a clear concept of the 3D transformation system.
If you can understand those, you can easily simulate this :) because you need to do these two things for each move at the same time.
Try this code:
var camera, scene, renderer, raycaster, geometry, material, mesh, box;
var rotation = {
  x: 0,
  y: 0
};
var distance = 500;
init();
animate();
function init() {
  raycaster = new THREE.Raycaster();
  scene = new THREE.Scene();
  camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000);
  camera.position.z = distance;
  camera.position.y = 100;
  scene.add(camera);
  geometry = new THREE.SphereGeometry(100, 50, 50, 50);
  material = new THREE.MeshNormalMaterial();
  mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);
  var transform = new THREE.Matrix4().getInverse(scene.matrix);
  var lat = 0 * Math.PI / 180;
  var lon = 90 * Math.PI / 180;
  var r = 100;
  var p = new THREE.Vector3(
    -r * Math.cos(lat) * Math.cos(lon),
    r * Math.sin(lat),
    r * Math.cos(lat) * Math.sin(lon)
  );
  p.applyMatrix4(transform);
  var geometry = new THREE.CubeGeometry(10, 10, 10);
  box = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({
    color: 0xff0000
  }));
  box.position.set(p.x, p.y, p.z);
  box.lookAt(mesh.position);
  //scene.add(box);
  box.updateMatrix();
  renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);
}
function animate() {
  requestAnimationFrame(animate);
  render();
}
function render() {
  rotation.x += 0.01;
  camera.position.x = distance * Math.sin(rotation.x) * Math.cos(rotation.y);
  camera.position.y = distance * Math.sin(rotation.y);
  camera.position.z = distance * Math.cos(rotation.x) * Math.cos(rotation.y);
  camera.lookAt(mesh.position);
  var w = window.innerWidth;
  var h = window.innerHeight;
  var mat = new THREE.Matrix4();
  var v = new THREE.Vector3();
  mat.copy(scene.matrix);
  mat.multiply(box.matrix);
  v.set(0, 0, 0);
  v.applyMatrix4(mat);
  v.project(camera);
  var euler = new THREE.Euler().setFromVector3(v);
  var rotX = (rotation.x * (180 / Math.PI));
  var rotY = (rotation.y * (180 / Math.PI));
  var rotZ = 0;
  var x = (w * (v.x + 1) / 2) - 12.5; // compensate the box size
  var y = (h - h * (v.y + 1) / 2) - 12.5;
  document.getElementById('face').style.webkitTransform = 'translate3d(' + x + 'px,' + y + 'px,0px) rotateX(' + rotY + 'deg) rotateY(' + rotX + 'deg) rotateZ(' + rotZ + 'deg)';
  renderer.render(scene, camera);
}
#face {
  position: absolute;
  width: 25px;
  height: 25px;
  border-radius: 50%;
  background-color: red;
}
<script src="https://rawgit.com/mrdoob/three.js/master/build/three.min.js"></script>
<div id="face"></div>

Rendering a large number of colored particles using three.js and the canvas renderer

I am trying to use the Three.js library to display a large number of colored points on the screen (about half a million to a million, for example). I am trying to use the Canvas renderer rather than the WebGL renderer if possible (the web pages would also be displayed in Google Earth Client bubbles, which seems to work with the Canvas renderer but not the WebGL renderer).
While I have the problem solved for a small number of points (tens of thousands) by modifying the code from here, I am having trouble scaling it beyond that.
But in the following code using WebGL and the Particle System I can render half a million random points, but without colors.
...
var particles = new THREE.Geometry();
var pMaterial = new THREE.ParticleBasicMaterial({
  color: 0xFFFFFF,
  size: 1,
  sizeAttenuation: false
});
// now create the individual particles
for (var p = 0; p < particleCount; p++) {
  // create a particle with random position values,
  // -250 -> 250
  var pX = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
      pY = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
      pZ = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
      particle = new THREE.Vertex(
        new THREE.Vector3(pX, pY, pZ)
      );
  // add it to the geometry
  particles.vertices.push(particle);
}
var particleSystem = new THREE.ParticleSystem(
  particles, pMaterial);
scene.add(particleSystem);
...
Is the better performance of the above code due to the Particle System? From what I have read in the documentation, it seems the Particle System can only be used by the WebGL renderer.
So my question(s) are
a) Can I render such a large number of particles using the Canvas renderer, or is it always going to be slower than the WebGL/ParticleSystem version? If so, how do I go about doing that? What objects and/or tricks do I use to improve performance?
b) Is there a compromise I can reach if I give up some features? In other words, can I still use the Canvas renderer for the large dataset if I give up the need to color the individual points?
c) If I have to give up the Canvas and use the WebGL version, is it possible to change the colors of the individual points? It seems the color is set by the material passed to the ParticleSystem and that sets the color for all the points.
EDIT: ParticleSystem and PointCloud have been renamed to Points. In addition, ParticleBasicMaterial and PointCloudMaterial have been renamed to PointsMaterial.
This answer only applies to versions of three.js prior to r.125.
To have a different color for each particle, you need to have a color array as a property of the geometry, and then set vertexColors to THREE.VertexColors in the material, like so:
// vertex colors
var colors = [];
for (var i = 0; i < geometry.vertices.length; i++) {
  // random color
  colors[i] = new THREE.Color();
  colors[i].setHSL(Math.random(), 1.0, 0.5);
}
geometry.colors = colors;
// material
material = new THREE.PointsMaterial({
  size: 10,
  transparent: true,
  opacity: 0.7,
  vertexColors: THREE.VertexColors
});
// point cloud
pointCloud = new THREE.Points(geometry, material);
Your other questions are a little too general for me to answer, and besides, it depends on exactly what you are trying to do and what your requirements are. Yes, you can expect Canvas to be slower.
EDIT: Updated for three.js r.124
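For r125 and newer, where the THREE.Geometry class and geometry.colors are gone, the equivalent setup uses BufferGeometry attributes instead. A minimal sketch (the random data is illustrative, and a scene is assumed to exist):
// Modern equivalent: per-point colors live in a BufferGeometry "color" attribute.
const count = 500000;
const positions = new Float32Array(count * 3);
const colors = new Float32Array(count * 3);
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
  positions[i * 3]     = Math.random() * 500 - 250;
  positions[i * 3 + 1] = Math.random() * 500 - 250;
  positions[i * 3 + 2] = Math.random() * 500 - 250;
  color.setHSL(Math.random(), 1.0, 0.5);
  color.toArray(colors, i * 3);
}
const geometry = new THREE.BufferGeometry();
geometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
geometry.setAttribute("color", new THREE.BufferAttribute(colors, 3));
const material = new THREE.PointsMaterial({ size: 10, vertexColors: true });
scene.add(new THREE.Points(geometry, material));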
