three.js: rotating panorama to look at camera direction

I have two spheres, each with a panoramic image mapped onto it. I want to make a smooth transition between the two panoramas with a fade effect. For each panorama I have an initial camera direction set for the best view.
Now the issue is: if the user is looking at some camera angle in the first panorama and then clicks a button to switch panoramas, I want to apply the fade effect and land directly on the initial camera angle of the other pano.
But as both panos share a common camera, I cannot play with the camera to achieve this, so I devised the following solution:
[Image depicting the problem.]
1. Rotate the target sphere so that it looks at the desired camera direction.
2. Rotate the target sphere so that it looks at the existing camera direction.
3. Fade out the source sphere.
4. Point the camera at the new pano's camera direction.
5. Rotate the pano back to its initial orientation.
Here I am not able to find the formula for rotating the panorama to look at the camera (i.e. the camera is static and the pano is rotated, to achieve the same effect as if we were moving the camera).
Can somebody please help me find the formula to rotate the pano (sphere) relative to the camera?

A matrix is a very powerful tool for solving rotation problems. I made a simple diagram to explain.
At the beginning, the camera is at the center of the left sphere and faces the initial viewpoint A. Then the camera turns to face another point B, so the camera's rotation has changed. Next, the camera moves to the center of the right sphere and keeps its orientation, and we need to rotate the right sphere. If C is the point that we want the camera to face, then first we apply the rotation from A to B, and second we apply a rotation by the angle θ that takes C to A.
So, how do we do that? I used matrices. Because A is the initial viewpoint, its matrix is the identity matrix, and the rotation from A to C can be represented by a matrix calculated with the three.js function Matrix4.lookAt(eye, center, up), where 'eye' is the camera position, 'center' is the coordinate of C, and 'up' is the camera's up vector. The rotation matrix from C to A is then the inverse of the matrix from A to C. Because the camera is facing B now, the camera's matrix equals the rotation matrix from A to B.
Finally, putting it all together, the final rotation matrix can be written as: rotationMatrix = matrixAtoB.multiply(new THREE.Matrix4().getInverse(matrixAtoC));
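Here is a minimal sketch of that computation (pointB, pointC and targetSphere are illustrative names, not from the original answer):
// A = initial viewpoint, B = where the camera currently looks,
// C = the desired initial viewpoint of the target panorama
var up = camera.up;
// rotation from A to B -- the camera's current rotation
var matrixAtoB = new THREE.Matrix4().lookAt(camera.position, pointB, up);
// rotation from A to C
var matrixAtoC = new THREE.Matrix4().lookAt(camera.position, pointC, up);
// compose: first undo A-to-C (i.e. go from C back to A), then go from A to B
var rotationMatrix = matrixAtoB.multiply(new THREE.Matrix4().getInverse(matrixAtoC));
// apply the result to the target panorama sphere
targetSphere.setRotationFromMatrix(rotationMatrix);
Note that getInverse() matches the three.js version used here; newer releases replace it with invert().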
jsfiddle example.
This is the matrix approach; you can also solve the problem with a spherical polar coordinate system.

Related

How to "focus zoom" on a spherical camera?

So, for anyone familiar with Google Maps: when you zoom, it zooms around the cursor.
That is to say, the matrix transformation for such a zoom is as simple as:
T · S · T⁻¹ · x
where T is the translation matrix representing the point of focus, S is the scale matrix, and x is any arbitrary point on the plane.
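As a minimal illustration of that 2D transform (the function and names are mine, not the poster's):
// zoom a point x about a focus point p by scale factor s,
// i.e. apply T * S * T^-1: move p to the origin, scale, move back
function zoomAboutPoint(x, p, s) {
  return {
    x: (x.x - p.x) * s + p.x,
    y: (x.y - p.y) * s + p.y
  };
}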
Now, I want to produce a similar effect with a spherical camera; think Sketchfab.
When you zoom in and out, the camera needs to be translated so as to give a similar effect to the 2D zooming in Maps. To be more precise: given a fully composed MVP matrix, there exists a set of planes parallel to the camera plane. Among those there exists a unique plane P that also contains the center of the current spherical camera.
Given that plane, there exists a point x that is the unprojection of the current cursor position onto it.
If the center of the spherical camera is c, then the direction from c to x is d = x - c.
And here's where my challenge comes in. Zooming is implemented as just offsetting the camera radially from the center. Given a change in zoom Δ, I need to find the translation vector u, collinear with d, that moves the center of the camera towards x, such that I get a similar visual effect to zooming in Google Maps.
Since I know this is a bit hard to parse, I tried to make a diagram: [diagram omitted]
TL;DR
I want to offset a spherical camera towards the cursor when I zoom. How do I pick my translation vector?
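A hedged sketch, reasoning purely by analogy with the 2D case above rather than from an accepted answer: if one zoom step scales apparent size by s, the point under the cursor stays visually fixed when the camera center moves toward x by the fraction (1 - 1/s) of d. All names are illustrative:
// u = d * (1 - 1/s), collinear with d = x - c as the question requires
function zoomTranslation(c, x, s) {
  var d = new THREE.Vector3().subVectors(x, c);
  return d.multiplyScalar(1 - 1 / s);
}
// new camera center: c.add(zoomTranslation(c, x, s))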

How to calculate a shape based on movement using OpenCV or another open-source lib?

Here is the original image; the green colour is the background, and blue is the shape.
If my eye moves closer to the shape, the blue square will get bigger, like this:
If my eye moves left, the shape will change like this:
I already know my eye's movement position, but how can I calculate how the shape will change? Any recommendations? Thanks.
You need to know the camera matrix (http://en.wikipedia.org/wiki/Camera_matrix); it defines the way 3D coordinates are projected to your screen coordinates. If you know the orientation of your camera, then you know the 3D coords of the object's points relative to the camera. Applying the camera matrix transform to these 3D coords will give you their 2D projections in your screen coordinates.
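A minimal sketch of that projection for a pinhole model (fx, fy, cx, cy are assumed intrinsics, not values from the answer):
// project a camera-space point p = {x, y, z} to screen coordinates
// using focal lengths fx, fy and principal point cx, cy
function projectPoint(p, fx, fy, cx, cy) {
  return {
    u: fx * (p.x / p.z) + cx,
    v: fy * (p.y / p.z) + cy
  };
}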

How to find the orientation of a Rubik's cube?

I'm trying to make a Rubik's cube game in WebGL using three.js (you can try it here).
And I have problems detecting which axis I have to rotate my cube around, according to the rotation of the cube. For instance, if the cube is in its original position/rotation and I want to rotate the left layer from down to up, I must make a rotation on the Y axis. But if I rotate my cube 90 degrees on Y, I will have to rotate it on the Z axis to rotate my left layer from down to up.
I'm trying to find a way to get the correct rotation axis according to the orientation of the cube.
For the moment I check which axis vector of the cube's rotation matrix is most nearly parallel to the vector (0,1,0) when I want to move a front layer from down to up. But it does not work in edge cases like this one, for instance:
I guess there is some simple way to do that, but I'm not good enough at matrices and mathematical stuff :)
An AxisHelper can show the axes of the scene, which you can use to determine the orientation:
var axishelper = new THREE.AxisHelper(40);
axishelper.position.y = 300;
scene.add(axishelper);
You could also log your cube and check the position and rotation properties with Chrome Developer Tools or Firebug.
You can store the orientation of each cube in its own 4x4 matrix (i.e. a "model" matrix) that tells you how to get from the cube's local coordinates to the world's coordinates. Now, since you want to rotate the cube around to an axis (i.e. vector) in world coordinates, you need to translate the axis into cube coordinates. This is exactly what the inverse of the model matrix yields.
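A minimal sketch of that idea (cube and the chosen axis are illustrative):
// world-space axis we want to rotate a layer around
var worldAxis = new THREE.Vector3(0, 1, 0);
// the inverse of the cube's model matrix maps world coords to cube-local coords
var worldToLocal = new THREE.Matrix4().getInverse(cube.matrixWorld);
// transformDirection applies only the rotation part, which is what an axis needs
var localAxis = worldAxis.clone().transformDirection(worldToLocal);
The component of localAxis with the largest absolute value then tells you which of the cube's local axes to rotate the layer around.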

ray intersect planes with dynamic geometry returns empty array

* SOLVED *
It was not about (0,0,0) or distortion. It's super weird, but I found out that computing the geometry's bounding sphere worked! (Even at the tile corners, where you would think a sphere wouldn't provide coverage, it does.)
http://threejs.org/docs/#Reference/Core/Geometry
computeBoundingSphere();
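In practice that means recomputing the bounds whenever the vertices move, roughly like this (r49-era API; plane is an assumed name):
// after displacing vertices on the point grid each frame
plane.geometry.verticesNeedUpdate = true;
plane.geometry.computeBoundingSphere(); // keeps ray intersection in sync with the moved vertices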
The problem description follows below.
Hey, I'm building a WebGL wall for my portfolio site. I need ray intersection to know both when the user hovers over the wall and which plane they're clicking on when they click, so I can redirect them to the correct project.
http://www.martinlindelof.com
What I do is add all planes at xyz (0,0,0), then use dynamic geometry to place their vertices on a point grid that's affected by a repelling particle (using traer).
Now when I do the ray intersect (using examples from three.js r49) I get an empty array back; nothing is hit.
Could this be because all the planes' origins are at (0,0,0)? Should I maybe, on each frame, move not only the vertices but the entire plane?
Or is it something else?
(Face normals seem to be pointing in the right direction: I see the texture on the plane and it's not inverted, as it would be if I were looking at the backside of the face. Double-sided faces are, I guess, not the default in three.js when creating a plane.)
Ray has problems with objects at position (0,0,0) (somewhere a value is divided by the position, which results in a division by zero). Try another position.

Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible.
Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny.
So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.
[Screenshots (1)-(3): raycasting voxels while the camera remains strictly axis-aligned.]
Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I first created a rendering from the "front view" and then rotated that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone.
Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:
camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!
Now of course I'm aware of this: in a simple axis-aligned-no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging left or right and top or bottom only depending on pixel position and the projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the X-step need to be "added" to the Z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:
fx and fy: pixel positions x and y
rayPos: vec3 for the ray starting position in world-space (calculated as below)
rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
rayStep: a temporary vec3
camPos: vec3 for the camera position in world space
camRad: vec3 for camera rotation in radians
pmat: typical perspective projection matrix
The algorithm / pseudocode:
// 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
// 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
// 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
// 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))
// set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
// perspective projection
rayDir.Normalize()
rayDir.MultMat(pmat)
// before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
rayPos.Add(camPos)
I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
It's a lot easier if you picture the system as a pinhole camera rather than anything else. Instead of shooting rays from the surface of a rectangle representing your image, shoot the rays from a point, through the rectangle that will be your image plane, into the scene. All the primary rays should have the same point of origin, only with slightly different directions. The directions are determined using basic trig by which pixel in the image plane you want them to go through. To make the simplest example, let's imagine your point is at the camera, and your image plane is one unit along the z axis, and two units tall and wide. That way, the pixel at the upper-left corner wants to go from (0,0,0) through (-1, -1, 1). Normalize (-1, -1, 1) to get the direction. (You don't actually need to normalize the direction just to do ray intersection, but if you decide not to, remember that your directions are non-normalized before you try to compute the distance the ray has travelled or anything like that.) For every other pixel, compute the point on the plane it wants to go through the way you've already been doing, by dividing the size of the plane by the number of pixels, in each direction.
Then, and this is the most important thing, don't try to do a perspective projection. That's necessary for scan-conversion techniques, to map every vertex to a point on the screen, but in ray-tracing, your rays accomplish that just by spreading out from one point into space. The direction from your start point (camera position, the origin in this example), through your image plane, is exactly the direction you need to trace with. If you were to want an orthographic projection instead (and you almost never want this), you'd accomplish this by having the direction be the same for all the rays, and the starting positions vary across the image plane.
If you do that, you'll have a good starting point. Then you can try again to add camera rotation, either by rotating the image plane about the origin before you iterate over it to compute ray directions, or by rotating the ray directions directly. There's nothing wrong with rotating directions directly! When you bear in mind that a direction is just the position your ray goes through if it starts from the origin, it's easy to see that rotating the direction, and rotating the point it goes through, do exactly the same thing.
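A minimal sketch of that pinhole setup with a Y-axis camera rotation folded in (all names are illustrative; the image plane sits one unit along +Z and is two units wide and tall, as in the example above):
// build the primary ray for pixel (px, py) on a width x height image
function primaryRay(px, py, width, height, camPos, camRadY) {
  // point on the z = 1 image plane that this pixel's ray passes through
  var dir = new THREE.Vector3(
    (px / width) * 2 - 1,   // -1 .. +1, left to right
    (py / height) * 2 - 1,  // -1 .. +1, top to bottom (upper-left maps to (-1, -1, 1))
    1
  );
  // rotating the direction is the same as rotating the plane point it passes through
  dir.applyAxisAngle(new THREE.Vector3(0, 1, 0), camRadY).normalize();
  return { origin: camPos.clone(), direction: dir };
}
No projection matrix is needed; perspective emerges from the rays diverging out of the single origin.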
