I am building an application in A-Frame and I want to constrain the viewer's movement, that is, I want to limit where the camera can go in the scene. For example, I have an a-plane that is the floor, and I want the camera to stop moving when it reaches 0 on the Z axis, to keep it from going past the floor, or again when it reaches 20 on the Z axis. I also wish to limit the movement in the X and Y directions. There are no obstacles in the scene besides the a-plane. Is creating a navigation mesh my only option, or is there an easier way to constrain movement? Thanks!
I don't know of built-in tools to do this, but you could do it with programming (this sounds pretty easy). You could create a custom component, attached to the camera, with a tick handler that records the position of the camera in world space each frame and stores it in a variable (camPosPrevFrame). Then create a function to test whether the current position is outside the bounds. If so, set the camera coordinate on the axis that exceeded its limit back to the previously recorded position (camPosPrevFrame). If you are simply testing whether the camera is on one side of an orthogonal plane (say the world-space XY plane), that is pretty simple math (e.g. camera.getWorldPosition(target).x > someAmount). If you have a more complex situation, there are ways to test which side of an arbitrary plane a point is on (it involves the dot product).
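A minimal sketch of that idea as an A-Frame component (the component name clamp-position and the bounds values are made up for illustration). Instead of restoring camPosPrevFrame, it simply clamps each axis to the bounds every frame, which achieves the same constraint:

// Register a component that keeps the entity's position inside a box.
AFRAME.registerComponent('clamp-position', {
  schema: {
    min: { type: 'vec3', default: { x: -10, y: 0, z: 0 } },
    max: { type: 'vec3', default: { x: 10, y: 5, z: 20 } }
  },
  tick: function () {
    // Runs every frame: clamp each axis of the camera entity's position.
    var pos = this.el.object3D.position;
    pos.x = Math.min(Math.max(pos.x, this.data.min.x), this.data.max.x);
    pos.y = Math.min(Math.max(pos.y, this.data.min.y), this.data.max.y);
    pos.z = Math.min(Math.max(pos.z, this.data.min.z), this.data.max.z);
  }
});

You would then attach it to the camera entity, e.g. <a-entity camera look-controls wasd-controls clamp-position="min: -10 0 0; max: 10 5 20"></a-entity>.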
I'm using ThreeJS, but this is a general math question.
My end goal is to position an object in my scene using 2D screen space coordinates; however, I want a specific z position in the perspective projection.
As an example, I have a sphere that I want to place towards the bottom left of the screen while having the sphere be 5 units away from the camera. If the camera were to move, the sphere would maintain its perceived size and position.
I can't use an orthographic camera because the sphere needs to be able to move around in the perspective projection. At some point the sphere will be undocked from the screen and interact with the scene using physics.
I'm sure the solution involves the camera's inverse matrix; however, that is beyond my abilities at the moment.
Any help is greatly appreciated.
Thanks!
Your post includes too many questions, which is out of scope for StackOverflow. But I’ll try to answer just the main one:
Create a plane Mesh using PlaneGeometry.
Rotate it to face the camera, place it 5 units away from the camera.
Add it as a child with camera.add(plane); so whenever the camera moves, the plane moves with it.
Use a Raycaster's .setFromCamera(coords, camera) and then .intersectObject(plane) methods to convert x, y screen coordinates into an x, y, z world position where the ray intersects the plane. You can read about it in the docs.
Once it’s working, make the plane invisible with visible = false
You can see the raycaster working in this official example: https://threejs.org/examples/#webgl_geometry_terrain_raycast
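Putting those steps together, a rough sketch (variable names and the plane size are illustrative; note that the camera needs to be in the scene for its children to be rendered):

scene.add(camera); // children of the camera only render if the camera is in the scene
const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(100, 100),
  new THREE.MeshBasicMaterial()
);
plane.position.z = -5; // 5 units in front of the camera; PlaneGeometry faces +z, i.e. back toward the camera
camera.add(plane);
plane.visible = false; // per the last step above; the raycast still hits it

const raycaster = new THREE.Raycaster();
// coords: pointer position in normalized device coordinates (x and y in [-1, 1])
function screenToWorld(coords) {
  raycaster.setFromCamera(coords, camera);
  const hits = raycaster.intersectObject(plane);
  return hits.length > 0 ? hits[0].point : null; // world-space position on the plane
}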
I'm working on my raytracer and it seems I can't manage to handle the case where the direction vector of my camera is parallel to the vector (0,1,0).
I think it is linked to my way to compute the vector up and right for camera but I can't manage to find a work around.
Here is how I do it:
cam_right = vector_cross(cam_dir, {0, 1, 0});
cam_up = vector_cross(cam_right, cam_dir);
Can somebody enlighten me?
You have the correct formula for calculating an orthogonal axis from a single cameraOut vector. However, as has been stated, this formula does not account for camera roll, which could be any rotation in the plane perpendicular to the camera direction. This becomes apparent when moving the camera across the pole (the y axis): there will be undesirable behavior (yes, it will be correctly aimed, but no doubt the roll won't be what you want).
For more information, look into gimbal lock.
The roll itself is not really incorrect; however, for the camera transition to be smooth and appear correct (rather than suddenly flipping or spinning as its direction approaches (0,1,0)), you need to correct any roll incurred. This is a rotation about the cameraOut axis, and ideally it should be relative to the previous cameraAlong. In other words, to maintain the correct roll (or perceived correct roll), you need to consider the camera pose (position and orientation) from the previous frame and cancel the roll out. Of course, if the camera doesn't move (i.e. you're rendering a frame with a static camera position), you have no previous camera state, so the roll cannot be calculated and must instead be explicitly defined as part of the scene definition.
Personally, I store an entire orthogonal axis set for the camera so the orientation and roll are always clearly defined. That is only for completeness; to be honest, you don't need to store the whole basis, since two vectors, cameraOut and cameraAlong (the third being cameraUp), are enough. cameraAlong depends on the handedness of your coordinate system: for an initial camera at, say, position (0,0,0) in a left-handed coordinate system, cameraAlong points to the viewer's right, while in a right-handed system it points the other way. cameraUp and cameraOut are the same in both coordinate systems.
Hope this helps.
P.S. This isn't ray-tracing specific, and the same principles apply to OpenGL/DirectX or any 3D representation.
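For completeness, here is a sketch of a fallback for the degenerate case, written with three.js-style vector math (names are illustrative). It keeps the basis well-defined at the pole, though as noted above it does not preserve roll continuity; for that you would carry over the previous frame's basis:

// Build an orthonormal camera basis from camDir, a normalized direction vector.
function cameraBasis(camDir) {
  const worldUp = new THREE.Vector3(0, 1, 0);
  // If camDir is (anti)parallel to worldUp, the cross product degenerates,
  // so fall back to another reference axis.
  const ref = Math.abs(camDir.dot(worldUp)) > 0.999
    ? new THREE.Vector3(0, 0, 1)
    : worldUp;
  const camRight = new THREE.Vector3().crossVectors(camDir, ref).normalize();
  const camUp = new THREE.Vector3().crossVectors(camRight, camDir).normalize();
  return { camRight: camRight, camUp: camUp };
}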
I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things are rendered. If you've played video games where you move too close to a wall and start to see through it, or where things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
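A sketch of option 2 in three.js, assuming you split your objects into two scenes (skyScene and mainScene are illustrative names):

renderer.autoClear = false; // we clear manually below
function render() {
  renderer.clear();                   // clear color and depth buffers
  renderer.render(skyScene, camera);  // draw the stars first...
  renderer.clearDepth();              // ...then throw away their depth values
  renderer.render(mainScene, camera); // so the foreground always draws on top
  requestAnimationFrame(render);
}
requestAnimationFrame(render);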
Even if you used option 1, you would still want to synchronize the positions of the camera and the skybox.
If you do not depth-cull, draw the skybox first and match its position, but not its rotation, to the camera's.
Also disable lighting on the skybox; instead, bake the ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling so that the skybox is always considered farther away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
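A sketch of those two points, where stardome is an illustrative name for your points/particle object:

stardome.material.depthWrite = false; // never writes depth
stardome.material.depthTest = false;  // never compared against other depth values
stardome.renderOrder = -1;            // drawn first, so everything else overdraws it

function animate() {
  stardome.position.copy(camera.position); // follow the camera's translation, not its rotation
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);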
I'm working on a simple Three.js demo that uses OrbitControls.js.
I'd like to change the behavior of panning in OrbitControls. Currently, when you pan the camera, it moves the camera in a plane that is perpendicular to the viewing direction. I'd like to change it so that the camera stays a constant distance from the ground plane and moves parallel to it. Google Earth uses a similar control setup.
Edit: I should have mentioned this detail in the first place, but I'd also like the point where you click and start dragging to remain directly under the cursor throughout the entire drag. There needs to be that solid connection between the mouse movement and what the user expects to happen on the screen. Otherwise, it feels as though I'm 'slipping' when I try to move around the scene.
Can someone give me a high-level explanation of how this might be done (with or without OrbitControls.js)?
EDIT: OrbitControls now supports panning parallel to the "ground plane", and it is the default.
To pan parallel to screen-space (the legacy behavior), set:
controls.screenSpacePanning = true;
Also available is MapControls, which has an API similar to that of Google Earth.
three.js r.94
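For reference, a minimal MapControls setup in recent three.js builds looks roughly like this (the module path is an assumption that varies by version; in older releases MapControls was exported from OrbitControls.js instead):

import { MapControls } from 'three/examples/jsm/controls/MapControls.js';

const controls = new MapControls(camera, renderer.domElement);
controls.screenSpacePanning = false; // pan parallel to the ground plane
controls.update();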
Some time ago I was working on exactly this issue, i.e. adaptation of OrbitControls.js to map navigation.
Here's the code of MapControls.js.
Here's the demo of the controls.
I figured it out. Here's the overview:
Store the mousedown event somewhere.
When the mouse moves, get the new mouse position from the mousemove event.
For each of those points, find the point on the plane where the click lands. (You'll need to put the points into camera space, unproject them into world space, then fire a ray from the camera through each point to find its intersection with the plane. This page explains the ray-plane intersection test.)
Subtract the world-space start intersection point from the world-space end intersection point to get the offset.
Subtract that offset from the camera's target point and you're done!
In the case of OrbitControls.js, the camera always looks at the target point, and its position is relative to that point. So when you change the target, the camera moves with it. Since the target always lies on the plane, the camera moves parallel to that plane (as long as you're panning).
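A sketch of those steps, assuming a ground plane at y = 0 and an OrbitControls-style controls.target (names are illustrative):

const groundPlane = new THREE.Plane(new THREE.Vector3(0, 1, 0), 0); // the plane y = 0
const raycaster = new THREE.Raycaster();

// ndc: pointer position in normalized device coordinates (x, y in [-1, 1])
function intersectGround(ndc) {
  raycaster.setFromCamera(ndc, camera);
  const hit = new THREE.Vector3();
  return raycaster.ray.intersectPlane(groundPlane, hit) ? hit : null;
}

let dragStart = null;
function onPointerDown(ndc) { dragStart = intersectGround(ndc); }
function onPointerMove(ndc) {
  if (!dragStart) return;
  const current = intersectGround(ndc);
  if (!current) return;
  const offset = new THREE.Vector3().subVectors(dragStart, current);
  camera.position.add(offset); // shift camera and target together so the
  controls.target.add(offset); // clicked point stays under the cursor
}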
You should set your camera's 'up' to the z axis:
camera.up.set(0,0,1)
And then, the main problem with OrbitControls is its panUp() function. It needs to be fixed.
My pull request : https://github.com/mrdoob/three.js/pull/12727
The y axis there is relative to the camera's axes, but it should be relative to a fixed plane in the world. To define the expected y axis, rotate the camera's x axis by 90° around the world z axis.
v.setFromMatrixColumn( objectMatrix, 0 ); // get X column of objectMatrix
v.applyAxisAngle( new THREE.Vector3( 0, 0, 1 ), Math.PI / 2 ); // rotate it 90° around world z to get the fixed pan direction
v.multiplyScalar( distance );
panOffset.add( v );
Enjoy!
Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:

1. Trying to find a good, intuitive mouse-rotation scheme. If you think in spherical coordinates, it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or the scene?

2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points then it is too slow, and if the mesh has few points then you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of that page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space, and you position your camera in it, the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child of this object. The translation along the camera's Z axis relative to the object should mimic your dolly (zoom is a FOV change).
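A sketch of that rig (pivot is an illustrative name):

const pivot = new THREE.Object3D();
scene.add(pivot);
pivot.add(camera);
camera.position.set(0, 0, 10); // dolly: distance from the pivot along its z axis

// Drive the orientation from spherical angles accumulated from mouse movement.
function orbit(theta, phi) {
  pivot.rotation.set(phi, theta, 0, 'YXZ'); // yaw around y first, then pitch around x
}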
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X/Y lines (or phi/theta lines, etc.) on the surface to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer center (0,0,0).
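A minimal TrackballControls setup, for reference (the module path is per recent three.js builds and may differ in older releases):

import { TrackballControls } from 'three/examples/jsm/controls/TrackballControls.js';

const controls = new TrackballControls(camera, renderer.domElement);
controls.target.set(0, 0, 0); // the point the camera orbits around

function animate() {
  controls.update(); // TrackballControls requires a per-frame update
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);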