Refraction after leaving a transparent (glass) object - raytracing

I'm currently writing a fragment shader which (besides other things) imitates the refraction effect on a glass sphere.
When a ray enters the sphere, it changes direction. So far so good. Now, when the refracted ray leaves the glass object, does it change direction again? I'm pretty sure it does, but I've been poking around the Internet and found differing opinions (e.g. at the bottom of this site it's clearly stated that there is no change in direction).
Thanks in advance.

Yes, it changes direction again: refraction at the glass-to-air boundary follows the same Snell's law as at the air-to-glass boundary, just with the ratio of refractive indices inverted, so the ray bends away from the normal on the way out.
You can implement it quite easily. First, render your scene into a cubemap centered inside the sphere.
In the second render step, take the surface normal and the camera-to-surface vector; with them you can use the refract() function to calculate the refracted ray direction.
You then have to calculate where that ray exits the sphere and apply refract() again; you just have to compute the surface normal at the exit point first.
The third step is to call the cubemap's texture() function, passing the outgoing ray direction as the lookup coordinate.
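A minimal sketch of the two refractions, written here as CPU-side three.js vector math rather than shader code (the refract() helper follows the GLSL specification; the sphere, entry ray, and IOR = 1.5 are illustrative values):

```js
import * as THREE from 'three';

// GLSL-style refract(): I is the incident direction, N the normal facing
// against I (both unit length), eta the ratio of refractive indices n1/n2.
function refract(I, N, eta) {
  const cosI = -I.dot(N);
  const k = 1 - eta * eta * (1 - cosI * cosI);
  if (k < 0) return null; // total internal reflection, no refracted ray
  return I.clone().multiplyScalar(eta)
          .add(N.clone().multiplyScalar(eta * cosI - Math.sqrt(k)));
}

const IOR = 1.5;                                   // glass (illustrative)
const center = new THREE.Vector3(0, 0, 0);         // unit sphere at origin
const dIn = new THREE.Vector3(0, 0, -1);           // incoming view ray
const pIn = new THREE.Vector3(0.3, 0, Math.sqrt(1 - 0.09)); // entry point

// First refraction: air -> glass; the normal points outward at the entry.
const nIn = pIn.clone().sub(center).normalize();
const dInside = refract(dIn, nIn, 1 / IOR);        // never null for eta < 1

// Second intersection: for a unit-length ray direction starting on the
// sphere, the far hit is at t = -2 * dot(d, p - center).
const t = -2 * dInside.dot(pIn.clone().sub(center));
const pOut = pIn.clone().addScaledVector(dInside, t);

// Second refraction: glass -> air. The outward normal now points along
// the ray, so flip it and invert the index ratio.
const nOut = pOut.clone().sub(center).normalize();
const dOut = refract(dInside, nOut.negate(), IOR); // null on TIR

// dOut (when not null) is the direction to sample the cubemap with,
// i.e. texture(envMap, dOut) in the fragment shader.
```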

Related

Three.js: Make raycaster's direction the same as object's

Hi all!
I'm currently making a game in THREE.js, and while I've got the 'vertical' collision with the ground sorted with a raycaster, I'm having trouble with horizontal collision. I've consulted a few tutorials, but they don't do what I want.
Basically, I've got a load of cubes. I've given them a random y-axis rotation and used translate to make them move 'forward' - pretty basic stuff. How do I get a raycaster to point in the same direction as a cube, almost like a line sticking out of its 'nose'? I.e., how do I get a raycaster's direction to copy that of an object?
It's been a while since I've been on here, but kind regards anyway,
Matthew
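A minimal sketch of one way to do this, assuming the cube's 'nose' is its local +Z axis (which is what Object3D.getWorldDirection() returns for non-camera objects in three.js) and that obstacles is an illustrative array of collidable meshes:

```js
// Aim a raycaster along the cube's facing direction each frame.
const origin = new THREE.Vector3();
const dir = new THREE.Vector3();

cube.getWorldPosition(origin);
cube.getWorldDirection(dir);   // world-space +Z of the cube, unit length

const raycaster = new THREE.Raycaster(origin, dir, 0, 2); // short "feeler"
const hits = raycaster.intersectObjects(obstacles, true);
if (hits.length > 0) {
  // Something is within 2 units straight ahead: block the forward move.
}
```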

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things are rendered. If you've ever played a video game where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, that was probably the clipping planes at work.
The workaround is usually one of two ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js uses for this, but my guess is a 32-bit float.
2) Render in "layers": render all the stars first, before anything else in the scene.
Option 2 is the one I usually use.
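A minimal sketch of option 2 in three.js, assuming a separate background scene holding the stars (bgScene, bgCamera, and the main scene/camera names are illustrative):

```js
// Render the stars first, then clear the depth buffer and render the
// main scene on top, so foreground objects always win.
const renderer = new THREE.WebGLRenderer();
renderer.autoClear = false;

const bgScene = new THREE.Scene();      // stardome lives here, radius ~50
const bgCamera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 1, 100);

function render() {
  // Copy only the main camera's rotation, not its position, so the
  // stars never get closer no matter where the viewer moves.
  bgCamera.quaternion.copy(camera.quaternion);

  renderer.clear();
  renderer.render(bgScene, bgCamera);   // stars first...
  renderer.clearDepth();                // ...then forget their depth
  renderer.render(scene, camera);      // foreground draws over everything
}
```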
Even if you used option 1, you would still want to synchronize the position of the camera and the skybox.
If you do not change the depth test, draw the skybox first and match its position, but not its rotation, to the camera's.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of other things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth test so that the skybox is always considered farther away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh, which means you'll end up drawing things that you really do want to cull.
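For completeness, a sketch of that depth trick in three.js terms: disabling the skybox material's depth write (and drawing it first) makes it lose every depth comparison, and MeshBasicMaterial ignores lighting, so the ambience can be baked into the texture as suggested. skyTexture and the box size are illustrative:

```js
// Skybox that never occludes anything and receives no lighting.
const skyMat = new THREE.MeshBasicMaterial({
  map: skyTexture,       // ambience baked directly into this texture
  side: THREE.BackSide,  // we view the box from the inside
  depthWrite: false,     // never write depth, so foreground always wins
});
const skybox = new THREE.Mesh(new THREE.BoxGeometry(100, 100, 100), skyMat);
skybox.renderOrder = -1; // draw before everything else
scene.add(skybox);

// Each frame, pin the box to the camera's position (not its rotation).
function update() {
  skybox.position.copy(camera.position);
}
```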

Convert coordinates of a child object to world coordinates

I'm quite new to three.js and obviously lacking some basic understanding of its coordinate systems.
I have an Object3D "group" that has some children (planes). I use "group" to rotate the group of planes, which works fine. The camera can move and the parent object can rotate. One can click on the planes to select them. What I want now is to let the selected plane fly toward the camera.
If I just move the plane to the camera position, it flies in some direction, but mostly not toward the camera - certainly because "group" seems to be the "world" for its child objects: if I move a plane along the z-axis, it moves along the z-axis of the parent.
I don't understand which coordinates I need to take (or how to transform them) to move the plane, bound to "group", in front of the camera.
Basically, I demoed with three.js what famo.us did; I just spent some two hours on it. I faked the wanted effect with an additional plane that is not grouped, which I can move to the camera without transformations. The demo is available here:
http://hwg.rattat.net/famo.html.
Would be nice if somebody could tell me how to get this working. I could still live with the fake if I were able to place the additional plane exactly over the selected plane.
Thanks in advance,
Christian
The question of converting local coordinates to world coordinates has been addressed at THREE.js: Calculate world space position of a point on an object. There might also be helpful information at how to: get the global/world position of a child object.
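A minimal sketch of the transformations involved, assuming plane, group, and camera from the question; the 5-unit distance is illustrative:

```js
// Where the plane really is, in world space (equivalent to running its
// local position through group.localToWorld()).
const worldPos = new THREE.Vector3();
plane.getWorldPosition(worldPos);

// A point 5 units in front of the camera, in world coordinates.
const target = new THREE.Vector3(0, 0, -5);
camera.localToWorld(target);

// To keep animating the plane while it stays a child of "group",
// convert the world-space target back into the group's local space...
const localTarget = group.worldToLocal(target.clone());
// ...then tween plane.position toward localTarget.
```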

Programmatic correction of camera tilt in a positioning system

A quick introduction:
We're developing a positioning system that works the following way: our camera sits on a robot and points upwards (looking at the ceiling). On the ceiling we have something like landmarks, thanks to which we can compute the position of the robot. It looks like this:
Our problem:
The camera is tilted a bit (0-4 degrees, I think), because the surface of the robot is not perfectly even. That means that when the robot turns around but stays at the same coordinates, the camera looks at a different position on the ceiling, and therefore our positioning program yields a different position for the robot, even though it only turned around and didn't move at all.
Our current (hardcoded) solution:
We've taken some test photos with the camera, turning it around the lens axis. From the pictures we've deduced that it's tilted ca. 4 degrees in the "up" direction of the picture. Using some simple geometrical transformations we've managed to reduce the tilt effect and find the real camera position. In the following pictures the grey dot marks the center of the picture, and the black dot is the real place on the ceiling the camera sits under; the black dot's position was computed by correcting the grey dot's position. As you can easily notice, the grey dots form a circle on the ceiling, and the black dot is the center of this circle.
The problem with our solution:
Our approach is completely unportable. If we moved the camera to a new robot, the angle and direction of the tilt would have to be completely recalibrated. Therefore we wanted to leave the calibration phase to the user, which would require taking some pictures, estimating the tilt parameters from them, and then entering those parameters into the program. My question to you is: can you think of any better (more automatic) way of computing the tilt parameters or correcting the tilt in the pictures?
Nice work. To have an automatic calibration is a nice challenge.
An idea would be to use the parallel lines of the ceiling tiles:
If the camera is perfectly level, then all lines will be parallel in the picture too.
If the camera is tilted, then all lines will be secant (they intersect in the vanishing point).
Now, this is probably very hard to implement. With the camera you're using, lens distortion needs to be corrected first so that straight lines are actually straight in the image.
Your practical approach is probably simpler and more robust. As you describe it, it seems it can be automated to become user-friendly: make the robot turn on itself and identify programmatically which point remains at the same place in the picture.
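A sketch of that automation, assuming a pinhole camera model and an evenly sampled full turn (so the traced circle's center is well approximated by the sample centroid); estimateTilt, focalLengthPx, and the sample format are illustrative names:

```js
// Spin the robot one full turn in place, logging where the fixed
// landmark overhead appears in each frame. Those points trace a circle;
// its center is the true "straight up" pixel, and the offset from the
// image center gives the tilt angle and direction.
function estimateTilt(samples, imageCenter, focalLengthPx) {
  // samples: [{x, y}, ...] evenly spread over one full rotation,
  // so the centroid approximates the circle's center.
  let cx = 0, cy = 0;
  for (const p of samples) { cx += p.x; cy += p.y; }
  cx /= samples.length;
  cy /= samples.length;

  // Offset of the true vertical from the optical axis, in pixels.
  const dx = cx - imageCenter.x;
  const dy = cy - imageCenter.y;

  // Pinhole model: an offset of r pixels at focal length f corresponds
  // to a tilt of atan(r / f); direction is the angle in the image plane.
  const tiltRad = Math.atan2(Math.hypot(dx, dy), focalLengthPx);
  const directionRad = Math.atan2(dy, dx);
  return { tiltRad, directionRad, center: { x: cx, y: cy } };
}
```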

Working with Three.js

Context: I'm trying to take THREE.js and use it to display conic sections.
Method: create a mesh of vertices and then connect Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good, intuitive mouse-rotation scheme. If you think in spherical coordinates, it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera, and as far as I can tell there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or the scene?
2. Is there a better way to graph functions than creating a mesh? If the mesh has many points it is too slow, and if it has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, there is a very good way to change the camera, which also includes zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure I understand the rotation bit. You do want to rotate an object, but you are correct: the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook its rotation up to your spherical coordinates, and link your camera as a child of this object. Translation along the camera's Z axis relative to the object then mimics your dolly (zoom is an FOV change). A sketch of such a rig is below.
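A minimal sketch of that rig, with illustrative sensitivity constants and clamping:

```js
// A pivot object carries the rotation; the camera is its child,
// offset along local Z so moving it in and out acts as a dolly.
const pivot = new THREE.Object3D();
scene.add(pivot);
pivot.add(camera);
camera.position.set(0, 0, 10);           // dolly distance from the target

let theta = 0;                            // azimuth, driven by left/right
let phi = Math.PI / 2;                    // polar angle, driven by up/down

function onMouseDrag(dx, dy) {
  theta -= dx * 0.005;
  phi = Math.max(0.01, Math.min(Math.PI - 0.01, phi - dy * 0.005));
  // 'YXZ' order applies the heading first, then the pitch.
  pivot.rotation.set(Math.PI / 2 - phi, theta, 0, 'YXZ');
}

function onMouseWheel(delta) {
  camera.position.z = Math.max(1, camera.position.z + delta * 0.01); // dolly
}
```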
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying, OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own x,y lines (or phi, theta lines, etc.) on the surface to better demonstrate cross-sections.
Hope that helps.
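On the second part of the question, one low-code way to get such a UV-mapped surface in three.js is a parametric geometry. A sketch (the addons import path matches recent three.js builds; the cone here is an illustrative choice, since its plane cross-sections are the conic sections):

```js
import { ParametricGeometry } from 'three/addons/geometries/ParametricGeometry.js';

// One sheet of the cone z = sqrt(x^2 + y^2), parameterized over (u, v).
function cone(u, v, target) {
  const r = u * 2;                 // radius grows with u in [0, 1]
  const a = v * Math.PI * 2;       // full turn as v goes from 0 to 1
  target.set(r * Math.cos(a), r * Math.sin(a), r);
}

// 40x40 grid: dense enough to read the shape, cheap enough to stay fast.
const geometry = new ParametricGeometry(cone, 40, 40);
const material = new THREE.MeshNormalMaterial({ side: THREE.DoubleSide });
scene.add(new THREE.Mesh(geometry, material));
```

DoubleSide renders both faces of the surface, which removes the need for the duplicated front/back faces described in the question.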
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan. In trackball controls you are moving the camera around the object; the object still rotates with respect to the scene origin (0,0,0).
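A minimal usage sketch for the orbit variant mentioned above (the addons import path matches recent three.js builds):

```js
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);
controls.target.set(0, 0, 0);   // orbit, zoom, and pan around this point
controls.update();              // call again after changing the target
```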
