Using Meshbuilder with AR - google-project-tango

Is it possible to use the Meshbuilder with the AR camera? I'm using the Meshbuilder with AR but I don't see the mesh. Breakpoints show that the mesh is being built. My theory is that the projection used for the AR camera doesn't match the one used for the depth camera and meshing.

I'd say take the world position of the mesh transform and calculate its yaw and pitch against the camera's look vector. If that angle is bigger than fieldOfView / 2, the mesh is off screen; otherwise something else is wrong. You can always do lookAt(mesh.transform.position) as well.
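A rough Three.js-flavoured sketch of that check (mesh and camera are placeholders, and fov/2 only bounds the vertical half-angle, so this ignores aspect ratio and is only approximate):
var meshPos = new THREE.Vector3().setFromMatrixPosition(mesh.matrixWorld);
var camPos = new THREE.Vector3().setFromMatrixPosition(camera.matrixWorld);
var lookDir = new THREE.Vector3(0, 0, -1).applyQuaternion(camera.quaternion); // camera looks down -Z
var angle = meshPos.sub(camPos).normalize().angleTo(lookDir); // radians
if (angle > THREE.Math.degToRad(camera.fov / 2)) { // THREE.MathUtils in newer releases
    // mesh is (roughly) off screen
}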

Related

Convert screen space X,Y position into perspective projection with a specific z position

I'm using ThreeJS, but this is a general math question.
My end goal is to position an object in my scene using 2D screen space coordinates; however, I want a specific z position in the perspective projection.
As an example, I have a sphere that I want to place towards the bottom left of the screen while having the sphere be 5 units away from the camera. If the camera were to move, the sphere would maintain its perceived size and position.
I can't use an orthographic camera because the sphere needs to be able to move around in the perspective projection. At some point the sphere will be undocked from the screen and interact with the scene using physics.
I'm sure the solution lies somewhere in the camera's inverse matrix; however, that is beyond my abilities at the moment.
Any help is greatly appreciated.
Thanks!
Your post includes too many questions, which is out of scope for StackOverflow. But I’ll try to answer just the main one:
Create a plane Mesh using PlaneGeometry.
Rotate it to face the camera and place it 5 units away from the camera.
Add it as a child with camera.add(plane), so whenever the camera moves, the plane moves with it.
Use a Raycaster's .setFromCamera(coords, camera) followed by its .intersectObject(plane) method to convert x, y screen coords into an x, y, z world position where the ray intersects the plane. You can read about it in the docs.
Once it's working, make the plane invisible with visible = false.
You can see the raycaster working in this official example: https://threejs.org/examples/#webgl_geometry_terrain_raycast
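Putting those steps together, a minimal sketch (coords is assumed to be a THREE.Vector2 in normalized device coordinates, -1 to 1):
var plane = new THREE.Mesh(
    new THREE.PlaneGeometry(100, 100),
    new THREE.MeshBasicMaterial()
);
plane.position.z = -5; // 5 units in front of the camera; the +Z normal already faces it here
camera.add(plane);     // the plane now follows the camera
scene.add(camera);     // the camera must be in the scene graph for its children to update

var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(coords, camera);
var hits = raycaster.intersectObject(plane);
if (hits.length > 0) {
    sphere.position.copy(hits[0].point); // world position, 5 units from the camera
}
// once it's working: plane.visible = false (the raycaster still hits invisible objects)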

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed out effect and the user can scroll through this list (in the code we do this by moving the camera y position)
In my app I'm trying to make my mouseover events work for the second state but it's tricky since the GPU transforms the vertices so the updated vertices are not reflected in the attributes on the JS side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then to change the boundingSphere's center to have the same z position of -50 after computing it.
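For reference, that boundingSphere step might look like this (same caveat as above: it only moves the bounds, not the vertices the raycaster actually tests against):
plane.geometry.computeBoundingSphere();
plane.geometry.boundingSphere.center.z = -50;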
I took this approach because I looked into the Raycaster and Mesh code for THREE.js, and it seems like they check both boundingSphere and boundingBox for object intersections. So I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that's wrong about this approach is maybe I'm not projecting the mouse coords right? Since all the objects now lie on the plane z = -50 would I need to project those mouse coordinates to that plane?
Inspired by the link posted by @prisoner849, I found a working solution: just create additional transparent planes equal in number to the planes in the scene, set their z position to -50, and intersect with these when in state #2.
A bit hacky, but works for now.
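A minimal sketch of that workaround, one invisible "hit plane" per real plane (planes is the same array used in the raycaster call above):
var hitPlanes = planes.map(function (plane) {
    var hit = new THREE.Mesh(plane.geometry.clone(), new THREE.MeshBasicMaterial({ visible: false }));
    hit.position.copy(plane.position);
    hit.position.z = -50; // match the vertex-shader displacement
    scene.add(hit);
    return hit;
});
// in state #2, raycast against the proxies instead of the displaced planes:
var intersects = raycaster.intersectObjects(hitPlanes);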

ThreeJS calculation for object in relation to camera position and orientation

It seems like this would be a pretty common problem, but I can't find an example online and I'm too much of a math noob...
Using ThreeJS, I have a library to do spatial audio positioning (https://github.com/tmwoz/hrtf-panner-js) based on user position, but my code assumes the camera looks straight ahead and doesn't move. Since my camera is moving, I need to get the xyz position of a 3D object in relation to the camera's position and orientation.
//finds the object in world coordinates
var p = new THREE.Vector3();
p.setFromMatrixPosition(visual.object.matrixWorld);
this.audioTrack.updateHrtf(p.x, p.y, p.z);
How do I translate the object into camera-space coordinates? Thanks for your help!
Note: I know that the WebAudio API has a mechanism to do this simply, but it doesn't have the power of the HRTF (Head Related Transfer Function) library, which sounds much better.
To transform a Vector3 vec from world space to camera space, do this:
camera.matrixWorldInverse.getInverse( camera.matrixWorld );
vec.applyMatrix4( camera.matrixWorldInverse );
three.js r.73
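Tying that into the HRTF snippet above, a sketch (note that newer three.js releases replace getInverse() with .copy(matrix).invert(); the r.73 call is kept here):
var p = new THREE.Vector3();
p.setFromMatrixPosition(visual.object.matrixWorld);   // world position
camera.matrixWorldInverse.getInverse(camera.matrixWorld);
p.applyMatrix4(camera.matrixWorldInverse);            // now in camera space
this.audioTrack.updateHrtf(p.x, p.y, p.z);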

Three JS: How to make a ray or rays from the camera to all objects in the renderer to check faceIndex

I have a project for children (http://kinosura.kiev.ua/sova/) and I need to check the faceIndex of all the cubes on screen.
Right now I use the intersections array from the mouse, but it only works when the user points at a cube.
How can I make a ray or rays from the camera to all objects to check faceIndex?
I tried to make four rays to the cubes, but if I set cube.position as the origin, like this:
raycaster.setFromCamera( cube1.position, camera )
I get an empty array of intersections.
I also tried setting a static 2D vector as the origin (taking the coordinate from the mouse), but my renderer size is relative and this coordinate changes all the time... it doesn't work.
Thanks for any answers.
I suggest that you try another approach. It appears that your cubes do not cover one another relative to the camera view, so use the surface normals and compare them to the view direction to determine whether each face points toward or away from the camera, with a simple one-per-polygon dot product.
When you are creating your geometry, before adding it to a THREE.Mesh, call .computeFaceNormals() on it.
Instead of raycasting, iterate through all the faces, grab the surface normal of each face, transform it relative to the view (the inverse transpose of the object's matrix), then take the dot product. It might sound complicated at first, but it's actually just a couple of steps and much faster than doing a lot of raycasts (which would probably include this work anyway!).
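A sketch of that per-face test, assuming the legacy THREE.Geometry API (geometry.faces with per-face normals):
cube.geometry.computeFaceNormals();
var viewDir = new THREE.Vector3(0, 0, -1).applyQuaternion(camera.quaternion);
var normalMatrix = new THREE.Matrix3().getNormalMatrix(cube.matrixWorld); // inverse transpose
cube.geometry.faces.forEach(function (face, faceIndex) {
    var worldNormal = face.normal.clone().applyMatrix3(normalMatrix).normalize();
    if (worldNormal.dot(viewDir) < 0) {
        // this face points toward the camera
    }
});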

Working with Three.js

Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good way to create an intuitive mouse rotation scheme. If you think in spherical coordinates, it feels like making up/down change phi and left/right change theta would work, but that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or the scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if the mesh has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant, though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child of this object. The translation along the camera's Z axis relative to the object should mimic your dolly (zoom is an FOV change).
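A sketch of that rig (the 0.005 factors and handler name are arbitrary):
var pivot = new THREE.Object3D();
scene.add(pivot);
camera.position.set(0, 0, 10); // dolly distance along the pivot-relative Z axis
pivot.add(camera);

function onMouseDrag(dx, dy) {
    pivot.rotation.y -= dx * 0.005; // left/right -> theta
    pivot.rotation.x -= dy * 0.005; // up/down -> phi
}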
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others have said, OrbitControls.js is an intuitive way for users to manage the camera.
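Typical wiring, assuming OrbitControls.js from the three.js examples folder is included on the page:
var controls = new THREE.OrbitControls(camera, renderer.domElement);
function animate() {
    requestAnimationFrame(animate);
    controls.update(); // required if damping or auto-rotate is enabled
    renderer.render(scene, camera);
}
animate();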
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code and allows an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X, Y lines (or phi, theta lines, etc.) on the surface to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0, 0, 0).
