Three.js: Make raycaster's direction the same as object's

Hi all!
I'm currently making a game in THREE.js and while I've got the 'vertical' collision with the ground sorted with a raycaster, I am having trouble with horizontal collision. I've consulted a few tutorials, but they don't do what I want them to.
Basically, I've got a load of cubes. I've given them a random y-axis rotation and used translate to make them move 'forward' - pretty basic stuff. How do I get a raycaster to point in the same direction as a cube, almost like a line sticking out of its 'nose'? i.e. how do I get a raycaster's direction to copy that of an object?
It's been a while since I've been on here, but kind regards anyway,
Matthew
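A minimal sketch of one way to do this, assuming each cube moves along its local +Z axis (e.g. via cube.translateZ(speed)), so that +Z is the 'nose'; checkAhead, obstacles and maxDistance are illustrative names, not anything from the original code:

    const raycaster = new THREE.Raycaster();
    const origin = new THREE.Vector3();
    const direction = new THREE.Vector3();

    function checkAhead(cube, obstacles, maxDistance) {
      cube.getWorldPosition(origin);       // ray starts at the cube
      cube.getWorldDirection(direction);   // world-space direction of the cube's +Z axis
      // If your cubes move along -Z instead (e.g. translateZ(-speed)), negate it:
      // direction.negate();

      raycaster.set(origin, direction);
      raycaster.far = maxDistance;

      const hits = raycaster.intersectObjects(obstacles, true);
      return hits.length > 0 ? hits[0] : null;   // nearest thing in front of the "nose"
    }

Calling something like this each frame before translating lets you stop or turn the cube when the nearest hit's distance gets too small.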

Related

Arbitrary direction for DeviceOrientationControls

When using DeviceOrientationControls, I need to allow the user to reset their view to an arbitrary direction. Basically if I'm sitting in a chair with limited range of head motion, I want to allow the camera to switch to a different direction (how I trigger that change is not important).
alphaOffsetAngle works great for resetting the view to look left, right, or behind, but not for looking up or down (or left/right, but rotated).
I tried adding offset angles for Beta and Gamma, but that isn't as straightforward as I hoped. I also tried adding the camera to an Object3D and rotating the parent. That sort of worked, but the controls got all wonky when the camera's parent was rotated.
lookAt() is pretty much what I want, but the DeviceOrientationControls update() seems to blow that away.
Does anyone have a working example of this arbitrary camera direction with the deviceorientationcontrols?
This question is similar to these, but I have not found a workable solution:
Add offset to DeviceOrientationControls in three.js
and:
DeviceOrientationControls.js - Calibration to ideal starting center
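One workaround that is sometimes suggested for this (a sketch only, not verified against every three.js release; recenterTo and rawDeviceQuat are illustrative names, and Quaternion.premultiply needs a reasonably recent build): let DeviceOrientationControls write the raw device orientation each frame, then apply your own offset rotation on top of it.

    const controls = new THREE.DeviceOrientationControls(camera);
    const offsetQuat = new THREE.Quaternion();     // extra "view offset" on top of the device orientation
    const rawDeviceQuat = new THREE.Quaternion();  // last orientation written by the controls

    function animate() {
      requestAnimationFrame(animate);
      controls.update();                          // overwrites camera.quaternion with the device orientation
      rawDeviceQuat.copy(camera.quaternion);      // remember it for re-centering
      camera.quaternion.premultiply(offsetQuat);  // re-apply the arbitrary offset every frame
      renderer.render(scene, camera);
    }

    // Call this from whatever UI event should re-centre the view onto `direction`
    // (a world-space THREE.Vector3). Roll is left unconstrained by this simple version.
    function recenterTo(direction) {
      const deviceForward = new THREE.Vector3(0, 0, -1).applyQuaternion(rawDeviceQuat);
      offsetQuat.setFromUnitVectors(deviceForward, direction.clone().normalize());
    }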

Refraction after leaving a transparent (glass) object

I'm currently writing a fragment shader, which (besides other things) imitates the refraction effect on a glass sphere.
So, when a ray enters the sphere, the ray changes direction. So far so good. Now, when the refracted ray leaves the glass object, does it change direction again? I'm pretty sure it does, but I've been poking around the Internet and I've found different opinions (e.g. at the bottom of this site it's clearly stated that there is no change in direction).
Thanks in advance.
Yes, it changes direction again. Refraction works both ways: the angle pair that relates the air-side and glass-side rays is the same going from air to glass as from glass to air, so Snell's law applies at both interfaces (with the index ratio inverted on the way out).
You can implement it fairly easily. First, render your scene into a cubemap centred inside the sphere.
The second render step takes the surface normal and the camera-to-surface vector; with those you can use the refract() function to compute the direction of the refracted ray.
You then have to work out where that ray leaves the sphere, and there you can use the refract() function again - you only have to compute the normal vector at the exit point.
The third step is to sample the cubemap with the texture() function, passing the outgoing vector as the lookup coordinate.
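For reference, here is what that refract() call computes, written out in plain JavaScript with THREE.Vector3 (the GLSL built-in does the same maths); note how the index ratio eta flips between the entry and exit events, which is why the ray bends both times:

    // Snell's law refraction, matching GLSL's refract(I, N, eta).
    // I: normalized incident direction, N: normalized normal pointing against I,
    // eta: ratio of refractive indices (n_from / n_to).
    function refract(I, N, eta) {
      const cosI = -I.dot(N);
      const k = 1 - eta * eta * (1 - cosI * cosI);
      if (k < 0) return null;   // total internal reflection: no transmitted ray
      return I.clone().multiplyScalar(eta)
              .add(N.clone().multiplyScalar(eta * cosI - Math.sqrt(k)));
    }

    // Entering the sphere (air -> glass, n = 1.0 -> 1.5):
    //   const inside = refract(rayDir, surfaceNormal, 1.0 / 1.5);
    // Leaving the sphere (glass -> air), with the outward normal at the exit
    // point negated so it points against the ray:
    //   const outside = refract(inside, exitNormal.clone().negate(), 1.5 / 1.0);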

What's the use of plane in this example

I was looking at demo by Mrdoob on dragging cubes.
http://threejs.org/examples/webgl_interactive_draggablecubes.html
I have understood the basic code to add the cubes and some other basic functionality, but I'm not getting what the PLANE has been used for in the code. I gather it's obviously being used for dragging the cubes somehow, but why hasn't the object's TRANSLATION property been used here instead?
view-source:http://threejs.org/examples/webgl_interactive_draggablecubes.html
And why do we subtract the PLANE's position from the intersection point to get the offset, and then subtract that offset from intersects[ 0 ].point?
In this example, the plane is being used as something for a ray to intersect, in order to get a position to work with in the scene. If there were nothing to intersect the ray with, there wouldn't be any reliable way to know what the mouse is pointing at.
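The pattern in that example boils down to roughly this (a simplified sketch using the newer Raycaster.setFromCamera API, not the demo's exact code; mouse is the pointer position in normalized device coordinates, plane is the large invisible plane mesh, and the handler names are illustrative):

    const raycaster = new THREE.Raycaster();
    const offset = new THREE.Vector3();
    let selected = null;

    function onMouseDown(mouse, camera, plane, cubes) {
      raycaster.setFromCamera(mouse, camera);
      const hits = raycaster.intersectObjects(cubes);
      if (hits.length === 0) return;
      selected = hits[0].object;

      plane.position.copy(selected.position);   // keep the plane under the grabbed cube
      plane.lookAt(camera.position);            // and facing the camera
      plane.updateMatrixWorld();

      const planeHit = raycaster.intersectObject(plane)[0];
      if (!planeHit) return;
      // Offset between where the ray hits the plane and the cube's centre,
      // so the cube doesn't snap its centre to the cursor.
      offset.copy(planeHit.point).sub(plane.position);
    }

    function onMouseMove(mouse, camera, plane) {
      if (!selected) return;
      raycaster.setFromCamera(mouse, camera);
      const planeHit = raycaster.intersectObject(plane)[0];
      if (planeHit) selected.position.copy(planeHit.point.sub(offset));
    }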

OpenGL : Line jittering with large scene and small values

I'm currently drawing a 3D solar system and I'm trying to draw the path of the orbits of the planets. The calculated data is correct in 3D space, but when I go towards Pluto, the orbit line shakes all over the place until the camera has come to a complete stop. I don't think this is unique to this particular planet, but given the distance the camera has to travel I think it's just more visible at this range.
I suspect it's something to do with the frustum, but I've been plugging values into each of the components and I can't seem to find a solution. To see anything I'm having to use very small numbers (E-5 magnitude) for the planet and nearby orbit points, but then up to E+2 magnitude for the further regions (maybe I need to draw it twice with different frustums?)
Any help greatly appreciated...
Thanks all for answering, but my solution was to draw the orbit with the same matrices that were drawing the planet, since the planet itself wasn't bouncing around. So the fix really was just to structure the drawing code better, sorry.
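Since this is a three.js page, here is the same idea sketched in three.js terms rather than the poster's OpenGL (orbitSamples is an illustrative array of THREE.Vector3 orbit points): keep the vertex data small and put the large offset in the object's transform, so the big numbers get combined with the camera transform on the CPU instead of being baked into single-precision vertex data.

    const center = orbitSamples[0].clone();                        // a point near the orbit
    const points = orbitSamples.map(p => p.clone().sub(center));   // small, local coordinates
    const orbit = new THREE.Line(
      new THREE.BufferGeometry().setFromPoints(points),
      new THREE.LineBasicMaterial({ color: 0xffffff })
    );
    orbit.position.copy(center);   // the large offset lives in the matrix, not in the vertices
    scene.add(orbit);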

Working with Three.js

Context: trying to take THREE.js and use it to display conic sections.
Method: create a mesh of vertices and then connect Face4s across them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good, intuitive mouse-rotation scheme. If you think in spherical coordinates, it feels like having up/down change phi and left/right change theta would work, but that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points then it is too slow, and if the mesh has few points then you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, there is a very good way to change the camera, which also gives you zooming and moving of the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
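A minimal TrackballControls setup looks roughly like this (assuming the TrackballControls script from the three.js examples folder is loaded alongside your scene, camera and renderer):

    const controls = new THREE.TrackballControls(camera, renderer.domElement);
    controls.rotateSpeed = 1.0;   // drag to orbit
    controls.zoomSpeed = 1.2;     // wheel / pinch to dolly
    controls.panSpeed = 0.8;      // right-drag to pan

    function animate() {
      requestAnimationFrame(animate);
      controls.update();          // must run every frame for the controls to take effect
      renderer.render(scene, camera);
    }
    animate();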
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child of this object. Translation along the camera's Z axis relative to the object should then mimic your dolly (zoom is an FOV change).
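A sketch of that parent-object setup (the names and the fixed distance are illustrative):

    // A pivot object carries the rotation; the camera hangs off it at a fixed
    // distance along the pivot's local Z, so rotating the pivot orbits the
    // camera around the origin while the scene's objects stay put.
    const pivot = new THREE.Object3D();
    scene.add(pivot);

    camera.position.set(0, 0, 10);   // dolly distance from the pivot
    pivot.add(camera);

    // Hook these up to mouse movement (the spherical angles mentioned above):
    function setOrbit(theta, phi) {
      pivot.rotation.y = theta;   // left/right
      pivot.rotation.x = phi;     // up/down
    }

    // Dolly by moving the camera along its local Z relative to the pivot
    // (whereas "zoom" would be a change of camera.fov).
    function dolly(delta) {
      camera.position.z = Math.max(1, camera.position.z + delta);
    }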
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X, Y lines on the surface (or phi, theta lines, etc.) to better demonstrate cross-sections.
Hope that helps.
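A rough sketch of the 'map a function over a UV surface' idea described above, using a parametric geometry (depending on the three.js version, the geometry class lives in the core as THREE.ParametricGeometry or in examples/jsm/geometries, and the callback either writes into a target vector, as here, or returns a new THREE.Vector3); the function f and the 50x50 resolution are just examples:

    const f = (x, y) => x * x - y * y;   // e.g. a saddle surface

    function surface(u, v, target) {
      const x = (u - 0.5) * 4;           // remap [0, 1] -> [-2, 2]
      const y = (v - 0.5) * 4;
      target.set(x, f(x, y), y);
    }

    const mesh = new THREE.Mesh(
      new THREE.ParametricGeometry(surface, 50, 50),            // 50 x 50 grid of vertices
      new THREE.MeshNormalMaterial({ side: THREE.DoubleSide })  // visible from both sides
    );
    scene.add(mesh);

Rendering the material double-sided also removes the need for duplicate front/back faces mentioned in the question.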
You can use trackball controls, which let you zoom in and out of an object, rotate it, and pan it. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0, 0, 0).
