How to find the coordinates of the viewfield corners of my Three.js camera? - three.js

My Three.js app has a static perspective camera looking at (0,0,0). How can I find the x/y coordinates, in the y=0 plane, of the corners of the camera's view field? The app covers the entire browser window, so these would correspond to the corners of the window. I want to render 3D models between those corners.

Just having the mentioned corner points is not sufficient to determine whether the user can see an object. The camera also has near and far planes and a perspective projection, which you should take into account.
I suggest you use a different workflow and create an instance of THREE.Frustum based on the camera's projection screen matrix. The code looks like this:
const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();
projScreenMatrix.multiplyMatrices( camera.projectionMatrix, camera.matrixWorldInverse );
frustum.setFromProjectionMatrix( projScreenMatrix );
You can then use methods like Frustum.intersectsObject() or Frustum.intersectsSprite() to determine whether 3D objects are in the view frustum or not.
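For example, assuming mesh is a THREE.Mesh that is already part of your scene, a quick check could look like this:
// intersectsObject() tests the object's bounding sphere against the frustum
if ( frustum.intersectsObject( mesh ) ) {
    // the mesh is at least partially inside the view frustum
}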
This is actually the way WebGLRenderer performs view frustum culling.

Related

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying the meshes with each object's vertices transformed by the vertex shader to the same arbitrary depth, say z = -50. This gives a zoomed-out effect, and the user can scroll through the list (in the code we do this by moving the camera's y position)
In my app I'm trying to make my mouseover events work for the second state, but it's tricky since the GPU transforms the vertices, so the updated vertices are not reflected in the geometry attributes on the JavaScript side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then I change the boundingSphere's center to have the same z position of -50 after computing it.
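In code, that bounding-sphere tweak looks roughly like this:
plane.geometry.computeBoundingSphere();
plane.geometry.boundingSphere.center.z = -50;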
I took this approach because I looked into the Raycaster and Mesh code for THREE.js, and it seems like they check both the boundingSphere and the boundingBox for object intersections. So I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is that I'm not projecting the mouse coordinates correctly. Since all the objects now lie on the plane z = -50, would I need to project those mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes, one for each plane in the scene. I set these planes' z position to -50 and intersect against them when in state #2.
A bit hacky, but works for now.
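Roughly, the workaround looks like this (names such as proxies are illustrative):
// One invisible stand-in plane per real plane, placed where the shader moves them in state #2
const proxies = planes.map( ( plane ) => {
    const proxy = new THREE.Mesh(
        plane.geometry.clone(),
        new THREE.MeshBasicMaterial( { transparent: true, opacity: 0 } )
    );
    proxy.position.copy( plane.position );
    proxy.position.z = -50;
    proxy.userData.target = plane; // remember which visible plane this stands in for
    scene.add( proxy );
    return proxy;
} );

// In state #2, raycast against the proxies instead of the shader-displaced planes
raycaster.setFromCamera( mouse, camera );
const hits = raycaster.intersectObjects( proxies );
if ( hits.length > 0 ) {
    const hoveredPlane = hits[ 0 ].object.userData.target;
}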

Raycasting through a custom camera projection matrix

After modifying my main camera's projection matrix, the ScreenPointToRay method that uses ray casting began to fail, so the method that detects touched objects via raycasting fails as well. Is there any way to use the ScreenPointToRay method with a custom camera projection matrix?
If you made a custom camera projection matrix, then you probably know where the user is pointing, so how about casting a ray yourself instead of using the helper?
If you have trouble translating the cursor to a world position, there is a good approximation: take four angles at approximately the edges of your camera's viewport (top edge at the horizontal center, left edge at the vertical center, and so on) and interpolate between them.
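This question is about a different engine's ScreenPointToRay, but expressed with three.js types the "cast a ray yourself" idea could look like the sketch below; ndcX/ndcY and customProjectionMatrix are assumed names.
// ndcX/ndcY are the pointer position in normalized device coordinates (-1..1)
const ndc = new THREE.Vector3( ndcX, ndcY, 0.5 );

// Invert (custom projection matrix * view matrix) to go from clip space back to world space
const inverse = new THREE.Matrix4()
    .multiplyMatrices( customProjectionMatrix, camera.matrixWorldInverse )
    .invert();
const worldPoint = ndc.applyMatrix4( inverse ); // applyMatrix4 performs the perspective divide

const origin = camera.position.clone();
const direction = worldPoint.sub( origin ).normalize();
const raycaster = new THREE.Raycaster( origin, direction );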

Intersect Sprites with Raycaster

I want to realize a 3D interactive globe with three.js, and I wonder if there is a way to intersect Sprite primitives with the Raycaster?
If you check the source code for Raycaster at
https://github.com/mrdoob/three.js/blob/master/src/core/Raycaster.js
it would appear that the intersectObject function only checks objects that are instances of THREE.Particle or THREE.Mesh, not THREE.Sprite. Possibly this is because sprites can be set to use screen coordinates, so a ray that projects into your 3D scene wouldn't make sense in that situation; or, when a sprite is placed in the scene, its image always faces the camera, so it doesn't behave like a standard 3D mesh.
Perhaps you could attach a PlaneGeometry or a very thin CubeGeometry to the position of your sprite, set its rotation to the rotation of the camera so that it is always parallel to the view plane of the camera like the sprite is, and then check for intersections with the mesh instead?
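A quick sketch of that idea (sprite, scene, raycaster, mouse and camera are assumed to already exist):
// An invisible plane that stands in for the sprite during raycasting
const proxy = new THREE.Mesh(
    new THREE.PlaneGeometry( 1, 1 ),
    new THREE.MeshBasicMaterial( { transparent: true, opacity: 0 } )
);
scene.add( proxy );

// Each frame, keep the proxy at the sprite's position and parallel to the camera's view plane
function updateProxy() {
    proxy.position.copy( sprite.position );
    proxy.quaternion.copy( camera.quaternion );
}

// Then test the proxy instead of the sprite
raycaster.setFromCamera( mouse, camera );
const hits = raycaster.intersectObject( proxy );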

Working with Three.js

Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s between them. I used two faces to produce a front and a back side so that when the conic section rotates, it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good way to create an intuitive mouse rotation scheme. If you think in spherical coordinates, it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera, and as far as I can tell there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or the scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if it has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
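A minimal setup with the trackball controls shipped in the three.js examples could look like this (the import path below is how newer versions ship it and may differ for your version):
import * as THREE from 'three';
import { TrackballControls } from 'three/examples/jsm/controls/TrackballControls.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.1, 1000 );
camera.position.set( 0, 0, 10 );

const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

const controls = new TrackballControls( camera, renderer.domElement );

function animate() {
    requestAnimationFrame( animate );
    controls.update(); // TrackballControls needs an update on every frame
    renderer.render( scene, camera );
}
animate();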
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child of this object. Translation along the camera's Z axis relative to the object should mimic your dolly (zoom is a FOV change).
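Something along these lines (a sketch; the scene and input handling are assumed to exist):
// Pivot object that carries the rotation; the camera rides on it as a child
const pivot = new THREE.Object3D();
scene.add( pivot );

camera.position.set( 0, 0, 10 ); // distance along the pivot's local z axis acts as the dolly
pivot.add( camera );

// Hook pointer deltas up to the spherical angles
function onDrag( deltaX, deltaY ) {
    pivot.rotation.y -= deltaX * 0.005; // theta: left/right
    pivot.rotation.x -= deltaY * 0.005; // phi: up/down
}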
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying, OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used Morphing Geometries, since I found mapping 3D math functions to a UV surface to require very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh I suppose but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines on the surface (or phi, theta lines etc) to better demonstrate cross-sections.
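A rough sketch of mapping a 3D math function onto a UV surface, using the ParametricGeometry helper from the three.js examples (the import path and the cone formula here are just for illustration):
import { ParametricGeometry } from 'three/examples/jsm/geometries/ParametricGeometry.js';

// Example surface: a cone, parametrised over ( u, v ) in [0, 1] x [0, 1]
function cone( u, v, target ) {
    const r = u * 2;                 // radius grows from 0 to 2
    const theta = v * Math.PI * 2;   // full revolution
    target.set( r * Math.cos( theta ), r * Math.sin( theta ), r );
}

const geometry = new ParametricGeometry( cone, 50, 50 );
const material = new THREE.MeshNormalMaterial( { side: THREE.DoubleSide } );
scene.add( new THREE.Mesh( geometry, material ) );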
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan it. In trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0,0,0).
