So, I'm building an architectural visualization with Three.js, and one of the things the user should be able to do is click on objects and orbit around them. The problem is that the camera is able to clip through walls. I fixed that by assigning each clickable object its own limiting azimuth and polar angles. Now the problem is that azimuth angles go from -PI to +PI, and it's impossible to limit between, for example, 1.5 and -2.4, because the limit applies the "wrong" way around. I hope this graphic explains that a little better:
Here's a link to the live version:
(You control the camera by clicking on the ground.)
https://jim-fx.com/modern/
As you can see, for objects on the right side of the room the limiting works flawlessly, but on the cabinet and the vases the camera clips through the wall.
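For reference, the per-object limiting I'm doing looks roughly like this (simplified; the property names are the standard OrbitControls ones, and the values are just placeholders):

    // Each clickable object stores its own allowed orbit range (placeholder values).
    const orbitLimits = {
      cabinet: { minAzimuth: 1.5, maxAzimuth: -2.4, minPolar: 0.2, maxPolar: Math.PI / 2 },
      vase:    { minAzimuth: -0.5, maxAzimuth: 1.0, minPolar: 0.2, maxPolar: Math.PI / 2 },
    };

    function focusObject(name, object, controls) {
      const limits = orbitLimits[name];
      controls.target.copy(object.position);
      // This is where it breaks down: OrbitControls clamps azimuth on the
      // -PI..+PI range, so a range that crosses the seam (e.g. 1.5 to -2.4)
      // gets limited the "wrong" way around.
      controls.minAzimuthAngle = limits.minAzimuth;
      controls.maxAzimuthAngle = limits.maxAzimuth;
      controls.minPolarAngle = limits.minPolar;
      controls.maxPolarAngle = limits.maxPolar;
      controls.update();
    }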
If anyone could help me, that would be amazing. Any other tips are welcome as well.
Greetings, Max
There are several solutions to your problem. One is to implement a kind of collision detection between the camera and some real or virtual walls, which stops the rotation. However, I guess you are looking for something simpler to implement.
As I don't know Three.js very well, I will give you a generic solution, which should be easily adaptable to Three.js.
The first thing is to not use the built-in Three.js orbit control, but to implement your own, where you control all of the transformations yourself. This is in fact very easy.
To create an orbitable camera, you simply have to create:
A "null" transformable object, which means a simple transformable entity that does not embed any shape (it is not rendered, it is invisible, but it exists). I hope Three.js provides such an elementary thing.
A camera, which should itself be another transformable.
Once you have this, you simply parent the camera to the "null" object. Now that it is parented, if you rotate the "null" object, you rotate the camera with it. Then, to orbit, you just have to move the camera back from the parent object:
Null Camera
+ - - - - - - - - - |>
Like this, the "null" object becomes the camera's "look-at point": if you rotate the "null" object around Y (I believe Three.js uses Y up), you control the camera's azimuth, and if you rotate the "null" object around X or Z (depending on the coordinate system), you control the camera's altitude. You can even move the camera closer to or further from the "look-at point" by translating it along its local Z axis.
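In Three.js terms, this could look roughly like the following sketch (THREE.Object3D plays the role of the "null" object; the angles and distances are just placeholders):

    // The "null" object is the look-at point / pivot of the orbit rig.
    const pivot = new THREE.Object3D();
    pivot.rotation.order = 'YXZ'; // apply azimuth (Y) before altitude (X)
    scene.add(pivot);

    // Parent the camera to the pivot and push it back along +Z, so it
    // looks at the pivot (a three.js camera looks down its local -Z axis).
    const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.z = 5; // orbit radius
    pivot.add(camera);

    // Orbiting is now just rotating the pivot (placeholder angles, in radians):
    pivot.rotation.y = 0.8;  // azimuth, around the up axis
    pivot.rotation.x = -0.3; // altitude / polar angle

    // Dollying toward or away from the look-at point is moving the camera
    // along its local Z axis:
    camera.position.z = 3;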
Well, you now have an orbit camera that is easy to control. But your problem is not yet solved: how do you make this Pi / -Pi limiting work for every initial camera orientation?
Simple: you create a second "null" transform object, call it "the socle", and parent the first one to it. This way, the rotation of the camera's "look-at point" is always local, and you can rotate "the socle" to give your "orbital camera" group an initial orientation in world space.
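Continuing the sketch above (the cabinet reference and the numbers are made up):

    // The socle carries the rig's initial orientation in world space.
    const socle = new THREE.Object3D();
    scene.add(socle);
    socle.add(pivot); // the pivot (and the camera with it) is now local to the socle

    // Aim the whole rig at an object, e.g. the cabinet:
    socle.position.copy(cabinet.position);
    socle.rotation.y = 2.1; // placeholder world-space heading for this object

    // Azimuth limits can now be symmetric around zero, because the pivot's
    // rotation is measured relative to the socle, not the world:
    const desiredAzimuth = 1.7; // whatever the drag input produces
    pivot.rotation.y = Math.max(-1.2, Math.min(1.2, desiredAzimuth));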
In fact, it is pretty much like building virtual gimbals. I hope I was clear; with pictures this would be easier to visualize...
Related
I am trying to create a 3D Visualization of an RC airplane in Threebox. The RC plane sends live telemetry, including:
GPS Coordinates
Gyro sensor data, showing the pitch, roll and heading of the plane.
I have now loaded a model of an airplane in Threebox; no problems with that.
My problem comes down to the rotation of the plane. I want the plane object to represent the current orientation of the RC plane. Since I have the live telemetry from the flight controller, this should be possible.
In the documentation, I found this method, which seemed like exactly what I needed:
plane.setRotation({x: roll, y: pitch, z: yaw/heading})
And it basically works. I can rotate the plane around its axes. But things get messed up when I combine the rotations.
For example: when I just update the roll axis, the object behaves just like I want it to. However, when I change the heading of the plane by 90 degrees, the roll axis suddenly becomes the pitch axis. It seems to me that the axes of the plane object don't rotate with the plane itself.
I've prepared a recreation of the issue on jsfiddle. You can change the heading of the plane using the slider in the bottom right.
I've been stuck on this for days, would be super happy for any help!
There are lots of issues with your jsfiddle that prevent it from running. To isolate an issue and make it easier to test, you should eliminate as many variables as possible - you're using two third-party libraries that play a big part in how transformations behave, particularly Threebox.
I would recommend sticking with three.js's built-in transformation tools unless you specifically need lat/lng transformations, or other transformations to move between a local Cartesian space and a global coordinate system. In this case, a very basic plane.setRotationFromEuler(new THREE.Euler(yaw, pitch, roll)) should do the trick. Be aware of how much the order of Euler rotations can affect the outcome, and that three.js uses radians for all its rotations, not degrees.
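To make the order and unit caveats concrete, an update per telemetry packet might look roughly like this (a sketch only: the 'YXZ' order and the axis mapping assume a plain Y-up three.js object and degree-based telemetry, and Threebox's own coordinate conventions may differ):

    // Convert incoming telemetry (assumed to be in degrees) to radians.
    const degToRad = THREE.MathUtils.degToRad; // THREE.Math.degToRad in older releases

    function updatePlaneOrientation(plane, telemetry) {
      const pitch = degToRad(telemetry.pitch);   // rotation around X
      const yaw   = degToRad(telemetry.heading); // rotation around Y (up)
      const roll  = degToRad(telemetry.roll);    // rotation around Z

      // 'YXZ' applies heading first, then pitch, then roll, so the pitch and
      // roll axes follow the plane's heading instead of staying world-aligned.
      plane.setRotationFromEuler(new THREE.Euler(pitch, yaw, roll, 'YXZ'));
    }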
My project combines a projection screen with a head tracking device, where the screen should act as a window through which I could see my virtual "world". Basically, this.
Initially, I thought this would be easy: Map the camera position to the head tracking, have it point towards my window in the virtual world, adjust camera parameters to fit its frustum to the window, and voilà!
Except it doesn't work, because I'm viewing the window (both real and virtual) at an angle, so the regular perspective camera doesn't do the trick: if I understand correctly, that camera's 'input' is always rectangular, but I need to 'fit' it into a trapezoid instead.
I think I should be able to achieve that by making my own projection matrix, but I'm a bit lost on how to do that: I have played a bit with basic matrix transforms (translate, scale, rotate), but I have zero experience with more complex stuff (i.e. perspective).
My best guess for now is trying to deduce the projection matrix from known transformed points (the corners of my window => the corners of the screen) but I feel like it's going to be quite expensive to do that each frame, and that doesn't account for the perspective inside the "window".
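To make that guess a bit more concrete, this is roughly the kind of off-axis frustum I think I need to rebuild each frame (untested; the window and eye measurements are placeholders, and it just follows the standard asymmetric-frustum formula rather than any particular three.js helper):

    // The virtual window is axis-aligned, centered on the origin, in the z = 0
    // plane, and the tracked eye sits at (ex, ey, ez) with ez > 0 in that frame.
    // The camera itself is placed at the eye position, looking perpendicular
    // to the window.
    function offAxisProjection(camera, windowWidth, windowHeight, ex, ey, ez, near, far) {
      // Project the window edges onto the near plane as seen from the eye.
      const scale = near / ez;
      const left   = (-windowWidth  / 2 - ex) * scale;
      const right  = ( windowWidth  / 2 - ex) * scale;
      const bottom = (-windowHeight / 2 - ey) * scale;
      const top    = ( windowHeight / 2 - ey) * scale;

      // Standard OpenGL-style asymmetric perspective frustum (row-major input).
      camera.projectionMatrix.set(
        2 * near / (right - left), 0, (right + left) / (right - left), 0,
        0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0,
        0, 0, -(far + near) / (far - near), -2 * far * near / (far - near),
        0, 0, -1, 0
      );
      // Newer three.js versions also cache the inverse; keep it in sync if yours does:
      // camera.projectionMatrixInverse.copy(camera.projectionMatrix).invert();
    }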
Thanks for any help!
Word up SO,
I'm trying to pull together something akin to an 'anchor look' component in A-Frame – the idea was supposed to be like a combination of aframe-href-component and aframe-look-at-component, where clicking a link to an anchor (Link) would make the camera "look at" the entity whose id="" matches the anchor.
I thought I had a working concept just by modifying the look-at component a bit, i.e. polling for hash updates and calling Object3D.lookAt() on the anchor, but there seems to be a problem I wasn't accounting for that probably comes from my poor understanding of Euler angles/quaternions/etc.:
When the camera's rotation gets updated by lookAt(), it seems to lose its previous rotational reference – dragging the camera has strange rotation results, and the results get stranger the more you've rotated the camera before calling lookAt().
I've set up a basic codepen at http://codepen.io/wosevision/pen/JWRMyK containing my version of the component to demonstrate; what is causing this and what is the proper way?
The perspective camera is nested in a group: when you drag the mouse to change the rotation, it is the group's rotation that changes, not the perspective camera's. The rotation will look strange if the perspective camera's own rotation is not (0, 0, 0).
It's very hard to set the camera rotation directly. If you really want to do that, you need to look deep into the implementation of the camera controls and modify it.
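One workaround along those lines is to aim the camera's parent group instead of the camera itself, so the look controls keep writing into an untouched camera rotation. A rough sketch (not a drop-in component; the ids and the assumption that the camera sits inside a rig entity are mine):

    // Rotate the rig, not the camera, so look-controls keeps a consistent
    // (0, 0, 0)-ish reference frame on the camera itself.
    const cameraEl = document.querySelector('[camera]');
    const rigEl = cameraEl.parentElement; // assumes the camera is wrapped in a rig entity

    function lookAtAnchor(hash) {
      const targetEl = document.querySelector(hash); // e.g. '#some-entity-id'
      if (!targetEl) return;

      const targetPos = new THREE.Vector3();
      targetEl.object3D.getWorldPosition(targetPos);
      rigEl.object3D.lookAt(targetPos);
    }

    window.addEventListener('hashchange', () => lookAtAnchor(window.location.hash));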
When using DeviceOrientationControls, I need to allow the user to reset their view to an arbitrary direction. Basically if I'm sitting in a chair with limited range of head motion, I want to allow the camera to switch to a different direction (how I trigger that change is not important).
alphaOffsetAngle works great for resetting the view to look left, right, or behind, but not for looking up or down (or left/right, but rotated).
I tried adding offset angles for Beta and Gamma, but that isn't as straightforward as I hoped. I also tried adding the camera to an Object3D and rotating the parent. That sort of worked, but the controls got all wonky when the camera's parent was rotated.
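For reference, the parent-object attempt looked roughly like this (simplified; the rotation values are arbitrary):

    // Wrap the camera in a parent object and rotate the parent to "re-aim" the view.
    const cameraParent = new THREE.Object3D();
    scene.add(cameraParent);
    cameraParent.add(camera);

    // DeviceOrientationControls keeps driving the camera's own rotation...
    const controls = new THREE.DeviceOrientationControls(camera);

    // ...while the parent applies the arbitrary "reset" direction on top.
    cameraParent.rotation.set(Math.PI / 4, Math.PI / 2, 0);

    function animate() {
      requestAnimationFrame(animate);
      controls.update(); // this is where things get wonky once the parent is rotated
      renderer.render(scene, camera);
    }
    animate();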
lookAt() is pretty much what I want, but the DeviceOrientationControls update() seems to blow that away.
Does anyone have a working example of this arbitrary camera direction with the deviceorientationcontrols?
This question is similar to these, but I have not found a workable solution:
Add offset to DeviceOrientationControls in three.js
and:
DeviceOrientationControls.js - Calibration to ideal starting center
Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates, it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good way to create an intuitive mouse rotation scheme. If you think in spherical coordinates, then it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or the scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points, it is too slow, and if the mesh has few points, you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
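If it helps, a minimal TrackballControls setup looks roughly like this (assuming the controls script from the three.js examples is included alongside the core library):

    // TrackballControls lets the user rotate, zoom and pan the camera with the mouse.
    const controls = new THREE.TrackballControls(camera, renderer.domElement);
    controls.rotateSpeed = 1.0;
    controls.zoomSpeed = 1.2;
    controls.panSpeed = 0.8;

    function animate() {
      requestAnimationFrame(animate);
      controls.update(); // must be called every frame
      renderer.render(scene, camera);
    }
    animate();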
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child to this object. The translation along the camera's Z axis relative to the object should mimic your dolly (zoom is a FOV change).
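A rough sketch of that setup (the angles and the distance are placeholders, and the mouse-to-angle mapping is left out):

    // Pivot object at the point of interest; the camera is parented to it.
    const pivot = new THREE.Object3D();
    scene.add(pivot);
    pivot.add(camera);
    camera.position.z = 8; // dolly: distance from the pivot along the camera's local Z

    // Spherical-style angles, driven by mouse movement.
    let theta = 0;  // azimuth, from left/right dragging
    let phi = 0.4;  // elevation, from up/down dragging

    function applyOrbit() {
      // Turn around the up axis first, then tilt.
      pivot.rotation.set(phi, theta, 0, 'YXZ');
      camera.position.z = 8; // adjust for dolly; change camera.fov for a true zoom
    }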
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying, OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
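As an illustration of the UV-surface idea (not the exact formulatoy.net code; ParametricGeometry lived in the three.js core in older releases and ships with the examples in recent ones):

    // Map a function over the unit UV square onto a 3D surface.
    // Here: a cone z = sqrt(x^2 + y^2), whose plane slices are conic sections.
    function coneSurface(u, v, target) {
      const theta = u * Math.PI * 2; // angle around the axis
      const r = v * 2;               // distance from the axis
      target.set(r * Math.cos(theta), r * Math.sin(theta), r);
    }

    // In older versions the mapping function returns a new THREE.Vector3
    // instead of filling `target`.
    const geometry = new THREE.ParametricGeometry(coneSurface, 64, 64);
    const material = new THREE.MeshNormalMaterial({ side: THREE.DoubleSide });
    scene.add(new THREE.Mesh(geometry, material));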
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X, Y lines on the surface (or phi, theta lines, etc.) to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan it. With trackball controls you are moving the camera around the object. The object still rotates with respect to the screen or renderer centre (0, 0, 0).