I'm working on a voxel game with three.js. For this, I need to generate as many chunks as required to fill the screen. Currently, I'm loading a circle of radius 20 around the player.
What is the simplest way to compute the exact range of chunks required to fill the camera frustum and avoid computing invisible chunks?
Every chunk has exactly the same size (let's say we have a vector size holding the correct value), and all chunks sit at Y=0 (only X and Z vary).
var frustum = new THREE.Frustum();
// multiply() takes a single matrix in current three.js; multiplyMatrices()
// combines the projection and view matrices. (Newer releases also rename
// setFromMatrix to setFromProjectionMatrix.)
frustum.setFromMatrix(new THREE.Matrix4().multiplyMatrices(
    camera.projectionMatrix, camera.matrixWorldInverse));
for (var i = 0; i < objects.length; i++) {
    objects[i].visible = frustum.intersectsObject(objects[i]);
}
Only objects whose bounding volumes intersect the camera frustum will be marked visible and rendered; everything else is skipped. This is documented in the three.js reference for THREE.Frustum. Hope this helps!
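To get closer to the exact chunk range you asked for, you can run the same frustum test against chunk bounding boxes instead of loaded objects. A minimal sketch, assuming the size vector from the question, a chunk grid on the XZ plane at Y=0, and a hypothetical upper bound maxRadius (for instance your current load radius of 20):

// Sketch: collect the chunk coordinates whose boxes intersect the frustum.
var frustum = new THREE.Frustum();
frustum.setFromMatrix(new THREE.Matrix4().multiplyMatrices(
    camera.projectionMatrix, camera.matrixWorldInverse));

var maxRadius = 20; // hypothetical upper bound on the search square
var centerX = Math.floor(camera.position.x / size.x);
var centerZ = Math.floor(camera.position.z / size.z);
var visibleChunks = [];

for (var cx = centerX - maxRadius; cx <= centerX + maxRadius; cx++) {
    for (var cz = centerZ - maxRadius; cz <= centerZ + maxRadius; cz++) {
        var box = new THREE.Box3(
            new THREE.Vector3(cx * size.x, 0, cz * size.z),
            new THREE.Vector3((cx + 1) * size.x, size.y, (cz + 1) * size.z));
        if (frustum.intersectsBox(box)) {
            visibleChunks.push({ x: cx, z: cz });
        }
    }
}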
After loading several glTF files, I rename them and try to reposition the camera so that it is centered on and looking at the centroid of the new objects, with the whole scene fitting inside the camera's view.
But the centering does not always work; sometimes the centroid is calculated somewhere completely different. The following code is run in render(), only once, after all objects have been loaded:
var all_centers = [];
scene.updateMatrixWorld();
scene.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
        if (child.name.indexOf("_") !== -1) { // the newly imported objects
            child.geometry.computeBoundingSphere();
            var the_center = new THREE.Vector3();
            child.getWorldPosition(the_center);
            all_centers.push(the_center);
        }
    }
});

var the_centroid = getPointsCentroid(all_centers);
var cameraPosition = new THREE.Vector3(the_centroid.x, the_centroid.y, -55);
camera.position.copy(cameraPosition);
camera.lookAt(the_centroid);
and here is the function for the centroid:
function getPointsCentroid(points) {
    var centroid = [0, 0, 0];
    for (var i = 0; i < points.length; i++) {
        var point = points[i];
        centroid[0] += point.x;
        centroid[1] += point.y;
        centroid[2] += point.z;
    }
    centroid[0] /= points.length;
    centroid[1] /= points.length;
    centroid[2] /= points.length;
    return new THREE.Vector3(centroid[0], centroid[1], centroid[2]);
}
For now I will ignore the problem of getting the whole scene to fit within the camera (this is a common problem, and you can find useful information about it online).
Instead, let us focus on what seems to be your main question: you wish to find the center of a group of objects. What you are currently doing is computing the average of the object centers. This means that if you have one object on the far left and nine objects on the far right, your computed center point will also be far to the right. (This approximates the center of mass, assuming the objects have similar mass.)
However, for the purpose of centering the camera so that every object is visible, you are not interested in the center of mass. You want a point whose distance to the leftmost point equals its distance to the rightmost point, whose distance to the lowest equals its distance to the highest, and so on. Such a point can be found using the bounding box of all your objects: the center of that bounding box is the point you are looking for.
If your camera is to be aligned to the axes, you can easily compute such a bounding box for each object as follows:
var box = new THREE.Box3().setFromObject(myMesh); // world-axis-aligned box
The bounding box of all the objects is simply the box represented by the lowest and highest coordinates of all the object bounding boxes you computed. The center of the complete bounding box will give you the point you are after.
Next, assuming the camera is aligned with the axes, the problem is simply finding a suitable distance from the camera to this point so that the entire bounding box fits inside the viewport.
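Putting those pieces together, a rough sketch (not the asker's code; it reuses the name filter from the question and assumes a perspective camera):

// Union the per-object boxes, center on the result, and back the camera
// off far enough that the whole box fits the vertical field of view.
var fullBox = new THREE.Box3();
scene.traverse(function (child) {
    if (child instanceof THREE.Mesh && child.name.indexOf("_") !== -1) {
        fullBox.union(new THREE.Box3().setFromObject(child));
    }
});

var center = fullBox.getCenter(new THREE.Vector3());
var boxSize = fullBox.getSize(new THREE.Vector3());

// Fit the larger of height and width (width / aspect is the height the
// width needs), then add half the depth so the front face also fits.
var fitHeight = Math.max(boxSize.y, boxSize.x / camera.aspect);
var distance = (fitHeight / 2) / Math.tan((camera.fov / 2) * Math.PI / 180)
             + boxSize.z / 2;

camera.position.set(center.x, center.y, center.z - distance);
camera.lookAt(center);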
I want to have a DOM node track a particle in my three.js simulation. The simulation is built with the Points object, using a BufferGeometry; I set the positions of each vertex in the render loop. Over the course of the simulation I move and rotate both the camera and the Points object (through its parent Object3D).
I can't figure out how to get reliable screen coordinates for any of my particles. I've followed the instructions on other questions, like Three.JS: Get position of rotated object, and Converting World coordinates to Screen coordinates in Three.js using Projection, but none of them seem to work for me. At this point I can see that the calculated projections of the vertices are changing with my camera movements and object rotations, but not in a way that I can actually map to the screen. Also, sometimes two particles that neighbor each other on the screen will yield wildly different projected positions.
Here's my latest attempt:
const { x, y, z } = layout.getNodePosition(nodes[nodeHoverTarget].id)
var m = camera.matrixWorldInverse.clone()
var mw = points.matrixWorld.clone()
var p = camera.projectionMatrix.clone()
var modelViewMatrix = m.multiply(mw)
var position = new THREE.Vector3(x, y, z)
var projectedPosition = position.applyMatrix4(p.multiply(modelViewMatrix))
console.log(projectedPosition)
Essentially I've replicated the operations in my shader to derive gl_Position.
projectedPosition is where I'd like to store the screen coordinates.
I'm sorry if I've missed something obvious... I've tried a lot of things but so far nothing has worked :/
Thanks in advance for any help.
I figured it out...
var position = new THREE.Vector3(x, y, z)
var projectedPosition = position.applyMatrix4(points.matrixWorld).project(camera)
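For completeness: project() leaves the vector in normalized device coordinates, in the range -1 to +1 on each axis. To position a DOM node you still need to map that to CSS pixels. A small sketch, assuming the canvas fills the window and a hypothetical element domNode:

// NDC -> CSS pixels; flip y because NDC y points up but CSS y points down.
var halfW = window.innerWidth / 2;
var halfH = window.innerHeight / 2;
var screenX = projectedPosition.x * halfW + halfW;
var screenY = -projectedPosition.y * halfH + halfH;
domNode.style.transform = 'translate(' + screenX + 'px, ' + screenY + 'px)';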
I'm trying to plot country names on the globe so that the text meshes are aligned with the surface, but I'm failing to calculate the proper rotations. For the text I'm using THREE.TextGeometry. The name appears on click on the mesh of a country, at the point of intersection found by raycasting. What I lack is knowledge of how to turn these coordinates into proper rotation angles. I'm not posting my code, as it's a complete mess, and I believe it will be easier for a knowledgeable person to explain how to achieve this in general.
Here is the desired result (screenshot omitted): country names lying flat along the globe's surface.
The other solution I tried (which, of course, is not the only one) is based on this SO answer. The idea is to use the normal of the face you hit with the raycaster:
1. Obtain the point of intersection.
2. Obtain the face of intersection.
3. Obtain the normal of that face (2).
4. Transform the normal (3) into world coordinates.
5. Set the position of the text object to the sum of the point of intersection (1) and the world-space normal (4).
6. Set the lookAt() target of the text object to the sum of its position (5) and the world-space normal (4).
Seems long, but it actually takes very little code:
var PGHelper = new THREE.PolarGridHelper(...); // let's imagine it's your text object ;)
var PGlookAt = new THREE.Vector3(); // point of lookAt for the "text" object
var normalMatrix = new THREE.Matrix3();
var worldNormal = new THREE.Vector3();
and in the animation loop:
for (var i = 0; i < intersects.length; i++) {
    normalMatrix.getNormalMatrix(intersects[i].object.matrixWorld);
    worldNormal.copy(intersects[i].face.normal).applyMatrix3(normalMatrix).normalize();
    PGHelper.position.addVectors(intersects[i].point, worldNormal);
    PGlookAt.addVectors(PGHelper.position, worldNormal);
    PGHelper.lookAt(PGlookAt);
}
jsfiddle example
The method works with meshes of any geometry (though I only checked it with spheres and boxes ;) ). And I'm sure there are other, better methods.
Very interesting question. I tried it this way: we can regard the text as a plane. Let's define a normal vector n from the sphere's center (or position) to the point on the sphere's surface where you want to display the text. Here is a simple way to orient the text along that normal:
1. Put the text mesh at the sphere's center: text.position.copy(sphere.position)
2. Make the text face the point on the sphere's surface: text.lookAt(point)
3. Relocate the text to that point: text.position.copy(point)
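The same three steps as one snippet (sphere, point, and text are assumed to already exist; the order of the calls is what makes it work, since lookAt() is evaluated from the center before the final move):

text.position.copy(sphere.position); // 1. start at the sphere's center
text.lookAt(point);                  // 2. orient the text toward the surface point
text.position.copy(point);           // 3. move it to the surface, keeping the rotation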
I'm building a boardgame in WebGL. The board can be rotated/zoomed. I need a way to translate a click on the canvas element (x,y) into the relevant point in 3D space (x, y, z). The ultimate result is that I want to know the (x, y, z) coordinate that contains the point that touches the object closest to the user. For instance, the user clicks a piece, and you imagine a ray traveling through 3D space that goes through both the piece and the game board, but I want the (x, y, z) coord of the piece at the point where it was touched.
I feel like this must be a very common problem, but I can't seem to find a solution in my Googling. There must be some way to project the current view of the 3D space into 2D so you can map each point in 2D space to the relevant point in 3D space. I want the user to be able to mouse over a space on the board and have that spot change color.
You're looking for an unproject function, which converts screen coordinates into a ray cast from the camera position into the 3D world. You must then perform ray/triangle intersection tests to find the closest triangle to the camera which also intersects the ray.
I have an example of unprojecting available at jax/camera.js#L568 -- but you'll still need to implement ray/triangle intersection. I have an implementation of that at jax/triangle.js#L113.
There is a simpler and (usually) faster alternative, however, called 'picking'. Use this if you want to select an entire object (for instance, a chess piece), and if you don't care about where the mouse actually clicked. The WebGL way to do this is to render the entire scene in various shades of blue (the blue is a key, while red and green are used for unique IDs of the objects in the scene) to a texture, then read back a pixel from that texture. Decoding the RGB into the object's ID will give you the object that was clicked. Again, I've implemented this and it's available at jax/world.js#L82. (See also lines 146, 162, 175.)
Both approaches have pros and cons (discussed here and in some of the comments after) and you'll need to figure out which approach best serves your needs. Picking is slower with huge scenes, but unprojecting in pure JS is extremely slow (since JS itself isn't all that fast) so my best recommendation would be to experiment with both.
FYI, you could also look at the GLU project and unproject code, which I based my code loosely upon: http://www.opengl.org/wiki/GluProject_and_gluUnProject_code
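For reference, a minimal sketch of that color-picking idea using three.js rather than Jax. The ID-to-color scheme, the one-pixel view offset, and the helper names are assumptions of this sketch; IDs start at 1 so that 0 can mean the background:

// Each pickable mesh gets a clone whose flat color encodes its ID; render
// the one pixel under the mouse into a tiny target and decode that pixel.
var pickingScene = new THREE.Scene();
var pickingTarget = new THREE.WebGLRenderTarget(1, 1);
var pixelBuffer = new Uint8Array(4);
var idToObject = {};
var nextId = 1; // 0 is reserved for "nothing" (the black background)

function addPickable(mesh) {
    var id = nextId++;
    idToObject[id] = mesh;
    var clone = new THREE.Mesh(mesh.geometry,
        new THREE.MeshBasicMaterial({ color: id })); // ID encoded as RGB
    clone.position.copy(mesh.position); // sketch only: syncs position, not full transform
    pickingScene.add(clone);
}

function pick(renderer, camera, mouseX, mouseY) {
    // mouseX/mouseY are expected in canvas device pixels.
    var canvas = renderer.domElement;
    camera.setViewOffset(canvas.width, canvas.height, mouseX, mouseY, 1, 1);
    renderer.setRenderTarget(pickingTarget);
    renderer.render(pickingScene, camera);
    renderer.setRenderTarget(null);
    camera.clearViewOffset();

    renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixelBuffer);
    var id = (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
    return idToObject[id] || null;
}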
I'm working on this problem at the moment. The approach I'm taking is:
1. Render the objects to a pick buffer, each with a unique colour.
2. Read the buffer pixel under the cursor and map it back to the picked object.
3. Render the picked object to the buffer with each pixel's colour a function of Z-depth.
4. Read the buffer pixel again and map it back to a Z-depth.
That gives the picked object and an approximate Z for the pick coordinates.
This is the working demo:
function onMouseUp(event) {
    event.preventDefault();

    // Click position in normalized device coordinates
    x_pos = (event.clientX / window.innerWidth) * 2 - 1;
    y_pos = -(event.clientY / window.innerHeight) * 2 + 1;
    z_pos = 0.5;

    var vector = new THREE.Vector3(x_pos, y_pos, z_pos);
    var projector = new THREE.Projector(); // old API; newer three.js: vector.unproject(camera)
    projector.unprojectVector(vector, camera);

    var raycaster = new THREE.Raycaster(camera.position, vector.sub(camera.position).normalize());
    var intersects = raycaster.intersectObjects(intersectObjects);

    if (intersects.length > 0) {
        // Keep the coordinates as numbers; toFixed() would turn them into strings
        xp = intersects[0].point.x;
        yp = intersects[0].point.y;
        zp = intersects[0].point.z;
        destination = new THREE.Vector3(xp, yp, zp);

        radians = Math.atan2(driller.position.x - xp, driller.position.z - zp);
        radians += 90 * (Math.PI / 180);
        console.log(radians);

        var tween = new TWEEN.Tween(driller.rotation)
            .to({ y: radians }, 200)
            .easing(TWEEN.Easing.Linear.None)
            .start();
    }
}
weissner-doors.de/drone/
Culled from one of the threads:
Not sure about (x, y, z), but you can get the canvas (x, y) using getBoundingClientRect():
function getCanvasCoord(event) { // take the event as a parameter instead of relying on the global
    var mx = event.clientX;
    var my = event.clientY;
    var canvas = document.getElementById('canvasId');
    var rect = canvas.getBoundingClientRect(); // check if your browser supports this
    mx = mx - rect.left;
    my = my - rect.top;
    return { x: mx, y: my };
}
I'm trying to set the projectionMatrix of a three.js PerspectiveCamera to match a projection matrix I calculated with a different program.
So I set the camera's position and rotation like this:
self.camera.position.x = 0;
self.camera.position.y = 0;
self.camera.position.z = 142 ;
self.camera.rotation.x = 0.0;// -0.032
self.camera.rotation.y = 0.0;
self.camera.rotation.z = 0;
Next I created a 4x4 Matrix (called Matrix4 in Three.js) like this:
var projectionMatrix = new THREE.Matrix4(-1426.149, -145.7176, -523.0170, 225.07519, -42.40711, -1463.2367, -23.6839, 524.3322, -0.0174, -0.11928, -0.99270, 0.43826, 0, 0, 0, 1);
and changed the camera's projection Matrix entries like this:
for (var i = 0; i < 16; i++) {
    self.camera.projectionMatrix.elements[i] = projectionMatrix.elements[i];
}
When I now render the scene I just get a black screen and can't see any of the objects I inserted. Turning the camera doesn't help either; I still can't see any objects.
If I insert a
self.camera.updateProjectionMatrix();
after setting the camera's projection matrix to the values of my projectionMatrix, the camera is set back to its original position (x=0, y=0, z=142, looking at the origin where I created some objects), and the values I set in the camera's matrix seem to have been overwritten. I checked this by printing the camera's projection matrix to the console. If I do not call updateProjectionMatrix(), the values stay as I set them.
Does somebody have an idea how to solve this problem?
"If I do not call the updateProjectionMatrix() function the values stay as I set them."
Correct: updateProjectionMatrix() recalculates those 16 numbers from the camera's own parameters, overwriting whatever you pasted in. The position and rotation you set above are a separate thing entirely; they make up the matrixWorld and its inverse (the view matrix), not the projection matrix.
In the case of a perspective camera you don't have many parameters: near, far, fov and aspect. Left, right, top and bottom are derived from these; with an orthographic camera you set them directly. These are then used to compose the projection matrix.
Scratchapixel has a really good tutorial on this subject. The next lesson there, on the OpenGL projection matrix, is actually more relevant to WebGL. Left, right, top and bottom are made from your fov and your aspect ratio; add near and far and you've got yourself a projection matrix.
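As an illustration, this is roughly what updateProjectionMatrix() does for a PerspectiveCamera. It's a sketch based on recent three.js source, where makePerspective() takes left/right/top/bottom/near/far; the example values are arbitrary:

// Derive the frustum planes from fov and aspect, then build the matrix.
var fov = 45, aspect = 16 / 9, near = 0.1, far = 1000;

var top = near * Math.tan((fov / 2) * Math.PI / 180);
var height = 2 * top;
var width = aspect * height;
var left = -width / 2;

var projectionMatrix = new THREE.Matrix4();
projectionMatrix.makePerspective(left, left + width, top, top - height, near, far);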
Now, in order for this to work, you either have to know what you're doing or get really lucky. Pasting these numbers from somewhere else and getting them to work is little short of winning the lottery. Best case, your scale is all wrong and you're clipping your scene. Worst case, you've pasted in a completely different kind of matrix, with a different XYZ convention, and there's no way you'll get it to work or even make sense of it.
Out of curiosity, what are you trying to do? Are you trying to match your camera to a camera from somewhere else?