How to click an object in THREE.js

I'm working my way through this book, and I'm doing okay I guess, but I've hit something I do not really get.
Below is how you can log to the console an object in 3D space that you click on:
renderer.domElement.addEventListener('mousedown', function(event) {
    var vector = new THREE.Vector3(
        renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
        -renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
        0
    );
    projector.unprojectVector(vector, camera);
    var raycaster = new THREE.Raycaster(
        camera.position,
        vector.sub(camera.position).normalize()
    );
    var intersects = raycaster.intersectObjects(OBJECTS);
    if (intersects.length) {
        console.log(intersects[0]);
    }
}, false);
Here's the book's explanation on how this code works:
The previous code listens to the mousedown event on the renderer's canvas.
Get that, we're finding the domElement the renderer is using via renderer.domElement. We're then binding an event listener to it with addEventListener, specifying that we want to listen for a mousedown. When the mouse is clicked, we launch an anonymous function and pass the event variable into it.
Then,
it creates a new Vector3 instance with the mouse's coordinates on the screen
relative to the center of the canvas as a percent of the canvas width.
What? I get how we're creating a new instance with new THREE.Vector3, and I get that the three arguments Vector3 takes are its x, y and z coordinates, but that's where my understanding completely and utterly breaks down.
Firstly, I'm making an assumption here, but to plot a vector, surely you need two points in space? If you give it just one set of coords, how does it know what direction to project from? My guess is that you actually use the Raycaster to plot the "vector"...
Now onto the arguments we're passing to Vector3... I get how z is 0: we're only interested in where we're clicking on the screen. We can click up or down, left or right, but not into or out of the screen, so we set that to zero. Now let's tackle x:
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
We're getting the PixelRatio of the device, multiplying it by where we clicked along the x axis, dividing by the renderer's domElement width, multiplying this by two and taking away one.
When you don't get something, you need to say what you do get so people can best help you out. So I feel like such a fool when I say:
I don't get why we even need the pixel ratio
I don't get why we multiply that by where we've clicked along the x axis
I don't get why we divide that by the width
I utterly do not get why we need to times by 2 and take away 1. Times by 2, take away 1. That could genuinely be times by an elephant, take away peanut and it would make as much sense.
I get y even less:
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
Why are we now randomly using -devicePixelRatio? Why are we now deciding to add one rather than take one away?
That vector is then un-projected (from 2D into 3D space) relative to the camera.
What?
Once we have the point in 3D space representing the mouse's location,
we draw a line to it using the Raycaster. The two arguments that it
receives are the starting point and the direction to the ending point.
Okay, I get that, it's what I was mentioning above. How we need two points to plot a "vector". In THREE talk, a vector appears to be called a "raycaster".
However, the two arguments we're passing to it don't make much sense. If we were passing in the camera's position and the vector's position and drawing the projection from those two points I'd get that, and indeed we are using camera.position for the first point, but
vector.sub(camera.position).normalize()
Why are we subtracting the camera.position? Why are we normalizing? Why does this useless f***** book not think to explain anything?
We get the direction by subtracting the mouse and camera positions and
then normalizing the result, which divides each dimension by the
length of the vector to scale it so that no dimension has a value
greater than 1.
What? I'm not being lazy, but not a single word of that makes sense to me.
Finally, we use the ray to check which objects are located in the
given direction (that is, under the mouse) with the intersectObjects
method. OBJECTS is an array of objects (generally meshes) to check; be
sure to change it appropriately for your code. An array of objects
that are behind the mouse are returned and sorted by distance, so the
first result is the object that was clicked. Each object in the
intersects array has an object, point, face, and distance property.
Respectively, the values of these properties are the clicked object
(generally a Mesh), a Vector3 instance representing the clicked
location in space, the Face3 instance at the clicked location, and the
distance from the camera to the clicked point.
I get that. We grab all the objects the vector passes through, put them into an array in distance order and log the first one, i.e. the nearest one:
console.log(intersects[0]);
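For example, a quick sketch of reading those properties off the first hit (following the book's description of the intersects entries):

var hit = intersects[0];    // nearest object under the mouse
console.log(hit.object);    // the clicked object, generally a Mesh
console.log(hit.point);     // Vector3: the clicked location in space
console.log(hit.face);      // Face3 at the clicked location
console.log(hit.distance);  // distance from the camera to the clicked point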
And, in all honesty, do you think I should give up on THREE? I mean, I've gotten somewhere with it certainly, and I understand all the programming aspects of it, creating new instances, using data objects such as arrays, using anonymous functions and passing in variables, but whenever I hit something mathematical I seem to grind to a soul-crushing halt.
Or is this actually difficult? Did you find this tricky? It's just that the book doesn't feel it's necessary to explain in much detail, and neither do other answers, as though this stuff is just normal for most people. I feel like such an idiot. Should I give up? I want to create 3D games. I really, really want to; I am drawn to the poetic idea of creating an entire world, not to the math. If I said I didn't find math difficult, I would be lying.

I understand your troubles and I'm here to help. It seems you have one principal question: what operations are performed on the vector to prepare it for click detection?
Let's look back at the original declaration of vector:
var vector = new THREE.Vector3(
    renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
    -renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
    0
);
renderer.devicePixelRatio is the ratio of physical device pixels to CSS ("site") pixels
event.pageX and .pageY are mouseX, mouseY
The this context is renderer.domElement, so .width, .height, .offsetLeft/.offsetTop relate to that
The trailing * 2 - 1 (and * 2 + 1 for y) is not a magic correction: it remaps a 0..1 fraction of the canvas into the -1..1 range of normalized device coordinates (NDC), which is the form unprojectVector expects
We don't care about the z-value, THREE will handle that for us. X and Y are our chief concern. Let's derive them:
We first find the distance of the mouse to the edge of the canvas: event.pageX - this.offsetLeft
We divide that by this.width to get the mouseX as a percentage of the screen width
We multiply by renderer.devicePixelRatio to account for high-DPI (e.g. retina) displays, where CSS pixels and physical device pixels differ
We multiply by 2 and subtract 1 to remap that 0..1 percentage into the -1..1 NDC range: a click on the left edge becomes -1, the center becomes 0, and the right edge becomes +1
For y, we negate the whole expression because page coordinates run top-down (0 at the top, this.height at the bottom) while NDC runs bottom-up; that flip is also why we add 1 instead of subtracting it
Thus you get the following arguments for the vector:
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
0
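To make the remapping concrete, here is a minimal sketch of the same conversion as a standalone helper (I've left out the devicePixelRatio factor, which is 1 on a standard display; canvas is assumed to be the renderer's DOM element):

// Convert a mouse event into normalized device coordinates (NDC).
function mouseToNDC(event, canvas) {
    var px = event.pageX - canvas.offsetLeft;  // pixels from the canvas's left edge
    var py = event.pageY - canvas.offsetTop;   // pixels from the canvas's top edge
    return {
        x: (px / canvas.width) * 2 - 1,    // 0..1 fraction -> -1..1, left to right
        y: -(py / canvas.height) * 2 + 1   // 0..1 fraction -> -1..1, flipped so +1 is the top
    };
}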
Now, for the next bit, a few terms:
Normalizing a vector means scaling it so that its length (magnitude) becomes 1 while its direction is preserved. To do so, you simply divide the x, y, and z components of the vector by its magnitude. It seems useless, but it's important because it creates a unit vector (magnitude = 1) in the direction of the mouse vector!
A Raycaster casts a ray through the 3D landscape produced in the canvas. Its constructor is THREE.Raycaster( origin, direction )
With these terms in mind, I can explain why we do this: vector.sub(camera.position).normalize(). First, we get the vector pointing from the camera position to the unprojected mouse position, vector.sub(camera.position). Then, we normalize it to make it a direction vector (again, magnitude = 1). This way, we're casting a ray from the camera into the 3D space in the direction of the mouse position! This operation allows us to then figure out any objects that are under the mouse by testing them against the ray.
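As a worked sketch of those two steps (the numbers are made up for illustration):

// Direction from the camera to the unprojected mouse point.
var dir = vector.clone().sub(camera.position);  // say (3, 0, 4)

// Normalizing divides each component by the vector's length.
var len = dir.length();                         // sqrt(3*3 + 0*0 + 4*4) = 5
dir.divideScalar(len);                          // (0.6, 0, 0.8), length 1
// dir.normalize() does exactly the same thing in one call.

var raycaster = new THREE.Raycaster(camera.position, dir);

In newer three.js releases the unproject-and-subtract dance is wrapped up for you: raycaster.setFromCamera(mouseNDC, camera) takes the NDC coordinates directly.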
I hope this helps. If you have any more questions, feel free to comment and I will answer them as soon as possible.
Oh, and don't let the math discourage you. THREE.js is by nature math-heavy because you're manipulating objects in 3D space, but experience will help you get past these kinds of understanding roadblocks. I would continue learning and return to Stack Overflow with your questions. It may take some time to develop an aptitude for the math, but you won't learn if you don't try!

This approach is more universal: it works no matter where the renderer's DOM element sits on the page, and regardless of the padding and margins of that element and its ancestors.
var rect = renderer.domElement.getBoundingClientRect();
mouse.x = ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1;
mouse.y = - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1;
Here is a demo; scroll to the bottom to click the cube.
<!DOCTYPE html>
<html>
<head>
    <script src="http://threejs.org/build/three.min.js"></script>
    <link rel="stylesheet" href="http://libs.baidu.com/bootstrap/3.0.3/css/bootstrap.min.css" />
    <style>
        body {
            font-family: Monospace;
            background-color: #fff;
            margin: 0px;
        }
        #canvas {
            background-color: #000;
            width: 200px;
            height: 200px;
            border: 1px solid black;
            margin: 10px;
            padding: 0px;
            top: 10px;
            left: 100px;
        }
        .border {
            padding: 10px;
            margin: 10px;
            height: 3000px;
            overflow: scroll;
        }
    </style>
</head>
<body>
    <div class="border">
        <div style="min-height:1000px;"></div>
        <div class="border">
            <div id="canvas"></div>
        </div>
    </div>
    <script>
        // Three.js ray.intersects with offset canvas
        var container, camera, scene, renderer, mesh,
            objects = [],
            count = 0,
            CANVAS_WIDTH = 200,
            CANVAS_HEIGHT = 200;

        // info
        info = document.createElement( 'div' );
        info.style.position = 'absolute';
        info.style.top = '30px';
        info.style.width = '100%';
        info.style.textAlign = 'center';
        info.style.color = '#f00';
        info.style.backgroundColor = 'transparent';
        info.style.zIndex = '1';
        info.style.fontFamily = 'Monospace';
        info.innerHTML = 'INTERSECT Count: ' + count;
        info.style.userSelect = "none";
        info.style.webkitUserSelect = "none";
        info.style.MozUserSelect = "none";
        document.body.appendChild( info );

        container = document.getElementById( 'canvas' );
        renderer = new THREE.WebGLRenderer();
        renderer.setSize( CANVAS_WIDTH, CANVAS_HEIGHT );
        container.appendChild( renderer.domElement );

        scene = new THREE.Scene();
        camera = new THREE.PerspectiveCamera( 45, CANVAS_WIDTH / CANVAS_HEIGHT, 1, 1000 );
        camera.position.y = 250;
        camera.position.z = 500;
        camera.lookAt( scene.position );
        scene.add( camera );

        scene.add( new THREE.AmbientLight( 0x222222 ) );
        var light = new THREE.PointLight( 0xffffff, 1 );
        camera.add( light );

        mesh = new THREE.Mesh(
            new THREE.BoxGeometry( 200, 200, 200, 1, 1, 1 ),
            new THREE.MeshPhongMaterial( { color: 0x0080ff } )
        );
        scene.add( mesh );
        objects.push( mesh );

        // find intersections
        var raycaster = new THREE.Raycaster();
        var mouse = new THREE.Vector2();

        // mouse listener
        document.addEventListener( 'mousedown', function( event ) {
            var rect = renderer.domElement.getBoundingClientRect();
            mouse.x = ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1;
            mouse.y = - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1;
            raycaster.setFromCamera( mouse, camera );
            intersects = raycaster.intersectObjects( objects );
            if ( intersects.length > 0 ) {
                info.innerHTML = 'INTERSECT Count: ' + ++count;
            }
        }, false );

        function render() {
            mesh.rotation.y += 0.01;
            renderer.render( scene, camera );
        }

        (function animate() {
            requestAnimationFrame( animate );
            render();
        })();
    </script>
</body>
</html>

Related

Does the point coordinate in three.js change if the camera moves?

I'm using the raycaster function to get the coordinates of portions of a texture as a preliminary to creating areas that will link to other portions of my website. The model I'm using is hollow and I'm raycasting to the intersection with the skin of the model from a point on the interior. I've used the standard technique suggested here and elsewhere to determine the coordinates in 3d space from mouse position:
// 1. sets the mouse position with a coordinate system where the center
//    of the screen is the origin
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
console.log("mouse position: (" + mouse.x + ", " + mouse.y + ")");

// 2. set the picking ray from the camera position and mouse coordinates
raycaster.setFromCamera( mouse, camera );

// 3. compute intersections
var intersects = raycaster.intersectObjects( scene.children, true );
var intersect = null;
var point = null;
//console.log(intersects);
for ( var i = 0; i < intersects.length; i++ ) {
    console.log(intersects[i]);
    if (i === intersects.length - 1) {
        intersect = intersects[ i ];
        point = intersect[ "point" ];
    }
}
This works, but I'm getting inconsistent results if the camera position changes. My assumption right now is that this is because the mouse coordinates are generated from the center of the screen, and that center has changed since I've moved the camera position. I know that getWorldPosition should stay consistent regardless of camera movement, but trying to call point.getWorldPosition returns "undefined". Is my thinking about why my results are inconsistent correct? If so, and getWorldPosition is indeed what I'm looking for, how do I go about calling it so I can get the proper xyz coordinates for my intersect?
EDITED TO ADD:
When I target what should be the same point (or close to it) on the screen, I get very different results.
For example, this is my model (and forgive the janky code under the hood -- I'm still working on it):
http://www.minorworksoflydgate.net/Model/three/examples/clopton_chapel_dev.html
Hitting the upper left corner of the first panel of writing on the opposite wall (so the spot marked with the x in the picture) gets these results (you can capture them within that model by hitting C, escaping out of the pointerlock, and viewing in the console) with the camera at 0,0,0:
x: -0.1947601252025508,
y: 0.15833788110908806,
z: -0.1643094916216681
If I move in the space (so with a camera position of x: -6.140427450769398, y: 1.9021520960972597e-14, z: -0.30737391540643844) I get the following results for that same spot (as shown in the second picture):
x: -6.229400824609087,
y: 0.20157559303778091,
z: -0.5109691487471469
My understanding is that if these are the world coordinates for the intersect point, they should stay relatively similar, but that x coordinate is very different. That makes sense given that it's the axis the camera moves along, but shouldn't it make no difference to the point of intersection?
My comment is not related to the camera, but I also had an issue with the raycaster: calculating the position of the mouse is more accurate the following way.
const rect = renderer.domElement.getBoundingClientRect();
mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
mouse.y = - ((event.clientY - rect.top) / rect.height) * 2 + 1;
So the trick, when there's no mouse available due to a pointer lock, is to use the direction of the ray created by the object controls. It's actually pretty simple, but not really documented out there.
var ray_direction = new THREE.Vector3();
var ray = new THREE.Raycaster(); // create once and reuse
controls.getDirection( ray_direction );
ray.set( controls.getObject().position, ray_direction );
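From there, a short sketch of finishing the pick (the scene traversal and the logging are my own additions for illustration):

// Intersect the pointer-lock ray against whatever you consider clickable.
var hits = ray.intersectObjects( scene.children, true );
if ( hits.length > 0 ) {
    console.log( hits[0].object.name, hits[0].distance );
}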

threejs - Defect in rotation - THREE.OrbitControls

I use OrbitControls now, but I still have a strange bug that is hard to explain. When I drag the mouse down it works normally at first, and then at one moment the whole scene begins to rotate in the wrong direction and flips.
I got this warning:
OrbitControls.js:1103 [Violation] Added non-passive event listener to
a scroll-blocking 'wheel' event. Consider marking event handler as
'passive' to make the page more responsive. See
https://www.chromestatus.com/feature/5745543795965952
Here is my code:
controls = new THREE.OrbitControls(camera, renderer.domElement);
//controls.addEventListener( 'change', render ); // call this only in static scenes (i.e., if there is no animation loop)
controls.enableDamping = true; // an animation loop is required when either damping or auto-rotation are enabled
controls.dampingFactor = 0.05;
controls.screenSpacePanning = true;
controls.minDistance = 14;
controls.maxDistance = 120;
controls.maxPolarAngle = Math.PI / 3;
controls.target.set(5, 4, -20);
I need to limit the rotation and disable full 360° rotation of the scene. For example, I want to allow a maximum angle of 45°.
Try this; I had a similar issue, applied it to my code, and it worked:
camera.up = new THREE.Vector3( 0, 0, 1 );
Did you take a look at the documentation? It outlines four different properties to limit angles of rotation. These are the defaults:
// How far you can orbit vertically, upper and lower limits.
// Range is 0 to Math.PI radians.
controls.minPolarAngle = 0; // radians
controls.maxPolarAngle = Math.PI; // radians
// How far you can orbit horizontally, upper and lower limits.
// If set, must be a sub-interval of the interval [ - Math.PI, Math.PI ].
controls.minAzimuthAngle = - Infinity; // radians
controls.maxAzimuthAngle = Infinity; // radians
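For the asker's 45° requirement, a minimal sketch (limiting the horizontal orbit; adjust the polar limits the same way if needed):

// Allow orbiting only 45 degrees to either side of the starting azimuth.
controls.minAzimuthAngle = - Math.PI / 4; // -45 degrees
controls.maxAzimuthAngle = Math.PI / 4;   // +45 degrees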
Edit:
The above solution is for OrbitControls, which is not what the original question asked. TrackballControls does not offer the ability to limit angles of rotation.

Nearby culling in Three.js despite camera not being near face

I've run into an issue after switching to a logarithmic depth buffer in Three.js. Everything runs nicely except for nearby culling of the ground as described in the following photos:
As you can see, the camera is elevated above the ground significantly. The character box that is shown is about 2 units above the ground, and my camera is set up as such:
var WIDTH = window.innerWidth
, HEIGHT = window.innerHeight;
var VIEW_ANGLE = 70
, ASPECT = WIDTH / HEIGHT
, NEAR = 1e-6
, FAR = 9000;
var aspect = WIDTH / HEIGHT;
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.rotation.order = 'YXZ';
So my NEAR parameter is nowhere near 2, the distance between the camera and the ground. You can see in the second image that I even move up the camera with my PointerLockControls and still run into the issue.
Can anyone diagnose my issue?
I also tested my issue by seeing if this bug occurred with a static camera as well. It does.
Additionally, this problem only happens with the logarithmic depth buffer, as it doesn't happen with the default depth buffer.
I have my camera as a child to a controls object, which is defined as follows:
controls = new THREE.PointerLockControls(camera);
controls.getObject().position.set(strtx, 50, strtz);
scene.add(controls.getObject());
camera.position.z += 2;
camera.position.y += .1;
Here's the relevant code for PointerLockControls:
var pitchObject, yawObject;
var v = new THREE.Vector3(0, 0, -1);

THREE.PointerLockControls = function(camera){
    var scope = this;
    camera.rotation.set(0, 0, 0);

    pitchObject = new THREE.Object3D();
    pitchObject.rotation.x -= 0.3;
    pitchObject.add(camera);

    yawObject = new THREE.Object3D();
    yawObject.position.y = 10;
    yawObject.add(pitchObject);

    var PI_2 = Math.PI / 2;

    var onMouseMove = function(event){
        if (scope.enabled === false) return;
        var movementX = event.movementX || event.mozMovementX || event.webkitMovementX || 0;
        var movementY = event.movementY || event.mozMovementY || event.webkitMovementY || 0;
        yawObject.rotation.y -= movementX * 0.002;
        pitchObject.rotation.x -= movementY * 0.002;
        pitchObject.rotation.x = Math.max( - PI_2, Math.min( PI_2, pitchObject.rotation.x ) );
    };

    this.dispose = function() {
        document.removeEventListener( 'mousemove', onMouseMove, false );
    };

    document.addEventListener( 'mousemove', onMouseMove, false );

    this.enabled = false;

    this.getObject = function () {
        return yawObject;
    };

    this.getDirection = function() {
        // assumes the camera itself is not rotated
        var rotation = new THREE.Euler(0, 0, 0, "YXZ");
        var direction = new THREE.Vector3(0, 0, -1);
        return function() {
            rotation.set(pitchObject.rotation.x, yawObject.rotation.y, 0);
            v.copy(direction).applyEuler(rotation);
            return v;
        };
    }();
};
You'll also notice that it's only the ground that is being culled, not other objects.
Edit:
I've whipped up an isolated environment that shows the larger issue. In the first image, I have a flat PlaneBufferGeometry that has 400 segments for both width and height, defined by var g = new THREE.PlaneBufferGeometry(380, 380, 400, 400);. Even getting very close to the surface, no clipping is present:
However, if I provide only 1 segment, var g = new THREE.PlaneBufferGeometry(380, 380, 1, 1);, the clipping is present
I'm not sure if this intended in Three.js/WebGL, but it seems that I'll need to do something to work around it.
I don't think this is a bug; I think this is a feature of how the depth buffer works under the different settings. Look at this example. On the right, the depth buffer can't make up its mind between the letters in "microscopic" and the sphere. This is because it has lower precision at very small scales and starts doing rounding that oscillates between one object and another, favoring draw order over z-depth.
It's always a tradeoff. If you want to avoid this issue, you can try raising the scale of your scene overall, so that the camera's 'near' is never so close to something that it gets rounded off; in other words, work in a number range that won't be rounded in the exponential model of the logarithmic z-buffer.
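For reference, a minimal sketch of the two knobs in play here (logarithmicDepthBuffer is a real WebGLRenderer option; the near value of 0.1 is just an illustrative, less extreme choice than 1e-6):

// Logarithmic depth buffer redistributes depth precision across the range.
var renderer = new THREE.WebGLRenderer({ logarithmicDepthBuffer: true });

// A less extreme near plane gives the buffer fewer chances to round
// nearby surfaces onto each other.
var camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 9000);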
Another question: how is the blue defined? Maybe what you're seeing is not clipping from being too close, but confusion over whether the blue or the ground is closer. If it's just a blue box encompassing everything, you could try making it bigger and more distant from the ground.
EDIT:
Okay, this looks like it should work, so I would start looking for edge cases. What can you do to change the scene so that it does work? What can you do to make other things start breaking?
try moving the landscape far down / far up (does the issue persist when looking up instead of down at it? does it persist even when it's unquestionably far away?)
try rotating the landscape
try changing the camera FOV
try changing the camera far plane
try changing the camera near plane from 1e-x notation to .000001, .0001, .01, .1, etc. and see what effect it has
console.log the camera object in your render function, and make sure that the fov, near, far, etc. are as you set them at setup and are not being overwritten and reset to defaults. Check what it prints out in Chrome's developer tools; you can browse the whole object, check position, parent name, all that stuff.
Basically, I don't see a blatant mistake, so I would guess it's something hard to spot, or it's working exactly as it should. Figure out what you can do to improve the effect or make it worse, and that will clarify a direction to go.
A good rule of thumb for debugging is to take things to an extreme, without trying to fix them or keep the code true to its purpose, and just see in what way it breaks further or changes. Report back when you find something.

three.js pointerlock multiplayer enemies rotation not working properly

I am creating a little multiplayer game based on this three.js pointerlock example.
I need to rotate the enemies' avatars on the actual player's screen so he can see the direction they are looking at, but I cannot figure out how to do it properly.
At the moment each enemy sends an object with its position and rotation to the server:
{
    position: controls.getObject().position,
    rotation: controls.getDirection(new THREE.Vector3())
}
The server receives it and sends it to the actual player, who with a function selects the respective enemy mesh (avatar) in the map and applies the position/rotation to it:
var object = scene.getObjectByName(data.player);
object.position.x = data.position.x;
object.position.y = data.position.y;
object.position.z = data.position.z;
object.rotation.x = data.rotation.y;
object.rotation.y = data.rotation.x;
object.rotation.z = data.rotation.z;
But only the position works; the rotation doesn't: the resulting rotation axes seem to be inverted, and they also vary depending on the direction the actual player is looking at.
Edit:
I also tried to "clone" it into another camera with a different rotation.order as described here:
var camera2 = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 1, 1000);
camera2.rotation.order = 'YXZ';
var yawObject = controls.getObject();
var pitchObject = yawObject.children[0];
camera2.rotation.set(pitchObject.rotation.x, yawObject.rotation.y, 0);
and making enemies send
{
    position: controls.getObject().position,
    rotation: camera2.rotation
}
but rotation is still wrong
I realised that I can rotate objects towards the pointer lock direction this way:
var dir = controls.getDirection(new THREE.Vector3());
var dis = 100;
mesh.lookAt(new THREE.Vector3(dir.x * dis, dir.y * dis, dir.z * dis));
So I can make enemies send their direction instead of their rotation, then make each avatar look at a point a small distance away in that direction.
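A sketch of the receiving side under that scheme (data.direction and the name lookup are assumptions matching the earlier code; note that lookAt expects a world-space point, so the target is offset from the avatar's own position):

// Apply a remote player's state to their avatar mesh.
var avatar = scene.getObjectByName(data.player);
avatar.position.copy(data.position);

// Build a world-space point ahead of the avatar along its view direction.
var target = new THREE.Vector3(data.direction.x, data.direction.y, data.direction.z)
    .multiplyScalar(100)
    .add(avatar.position);
avatar.lookAt(target);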

convert Point3D To Screen2D get wrong result in three.js

I use a function like this in three.js r69:
function Point3DToScreen2D(point3D, camera){
    var p = point3D.clone();
    var vector = p.project(camera);
    vector.x = (vector.x + 1) / 2 * window.innerWidth;
    vector.y = -(vector.y - 1) / 2 * window.innerHeight;
    return vector;
}
It works fine when I keep the scene still. But when I rotate the scene it returns a wrong position on the screen. It occurs when I rotate about 180 degrees: the point shouldn't have a position on the screen at all, but it still shows.
I set a position with var tmpV = Point3DToScreen2D(new THREE.Vector3(-67,1033,-2500), camera); in my update loop and show it with CSS3D. When I rotate about 180 degrees (but less than 360), the point shows up on the screen again. It's obviously a wrong position, as can be seen from the scene, since I haven't rotated a full 360 degrees.
I know little about matrices, so I don't know how project works.
Here is the source of project in three.js:
project: function () {
    var matrix;
    return function ( camera ) {
        if ( matrix === undefined ) matrix = new THREE.Matrix4();
        matrix.multiplyMatrices( camera.projectionMatrix, matrix.getInverse( camera.matrixWorld ) );
        return this.applyProjection( matrix );
    };
}()
Is the matrix.getInverse( camera.matrixWorld ) redundant? I tried deleting it and it didn't work.
Can anyone help me? Thanks.
You are projecting a 3D point from world space to screen space using a pattern like this one:
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 ),
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
vector.z = 0;
However, using this approach, points behind the camera are projected to screen space, too.
You said you want to filter out points that are behind the camera. To do that, you can use this pattern first:
var matrix = new THREE.Matrix4(); // create once and reuse
...
// get the matrix that maps from world space to camera space
matrix.getInverse( camera.matrixWorld );
// transform your point from world space to camera space
p.applyMatrix4( matrix );
Since the camera is located at the origin in camera space, and since the camera is always looking down the negative-z axis in camera space, points behind the camera will have a z-coordinate greater than zero.
// check if point is behind the camera
if ( p.z > 0 ) ...
three.js r.71
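Putting the two patterns together, a sketch of a Point3DToScreen2D variant that returns null for points behind the camera (the helper name and the null convention are my own):

var _matrix = new THREE.Matrix4(); // create once and reuse

function Point3DToScreen2D(point3D, camera) {
    // In camera space the camera looks down the negative z-axis,
    // so anything with z > 0 is behind it.
    var p = point3D.clone();
    _matrix.getInverse(camera.matrixWorld);
    p.applyMatrix4(_matrix);
    if (p.z > 0) return null;

    // Otherwise project to NDC and map to screen pixels.
    var v = point3D.clone().project(camera);
    v.x = (v.x + 1) / 2 * window.innerWidth;
    v.y = -(v.y - 1) / 2 * window.innerHeight;
    return v;
}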
Like the example above, but you can check vector.z to determine whether the point is in front:
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 ),
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
//behind the camera if z isn't in 0..1 [frustum range]
if(vector.z > 1){
vector = null;
}
To delve a little deeper into this answer:
// behind the camera if z isn't in 0..1 [frustum range]
if(vector.z > 1){
vector = null;
}
This is not true. The mapping is not continuous. Points beyond the far plane also map to z-values greater than 1.
What exactly does the z-value of a projected vector stand for? X and Y are in normalised clip space [-1, 1]; what about z?
Would this be true?
projectVector.project(camera);
var inFrontOfCamera = projectVector.z < 1;
Since the camera is located at the origin in camera space, and since the camera is always looking down the negative-z axis in camera space, points behind the camera will have a z-coordinate greater than 1.
//check if point is behind the camera
if ( p.z > 1 ) ...
NOTICE: If this condition is satisfied, the projected coordinates are mirrored through the center (centrosymmetric), so you have to flip them back:
{x: 0.233, y: -0.566, z: 1.388}
// after transform
{x: -0.233, y: 0.566, z: 1.388}
