I'm trying to make a Rubik's Cube game in WebGL using three.js (you can try it here).
And I have problems detecting which axis I have to rotate my cube around, according to the current orientation of the cube. For instance, if the cube is in its original position/rotation and I want to rotate the left layer from down to up, I must make a rotation on the Y axis. But if I rotate my cube 90 degrees on Y, I will then have to rotate on the Z axis to move my left layer from down to up.
I'm trying to find a way to get the correct rotation axis according to the orientation of the cube.
For the moment I check which vector of the axes of the cube's rotation matrix is most parallel to the vector (0, 1, 0) if I want to move a front layer from down to up. But it does not work in edge cases like this one, for instance:
I guess there is some simple way to do that, but I'm not good enough at matrices and mathematical stuff :)
An AxisHelper can show the axes of the scene, which you could use to determine the orientation.
var axishelper = new THREE.AxisHelper(40); // 40-unit axes: X is red, Y is green, Z is blue
axishelper.position.y = 300; // place the helper 300 units up so it is easy to spot
scene.add(axishelper);
You could also log your cube and check the position and rotation properties with Chrome Developer Tools or Firebug.
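For example (cube here is a placeholder for your own cube object):
console.log(cube.position, cube.rotation); // inspect the values in the dev tools console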
You can store the orientation of each cube in its own 4x4 matrix (i.e. a "model" matrix) that tells you how to get from the cube's local coordinates to world coordinates. Now, since you want to rotate the cube around an axis (i.e. a vector) given in world coordinates, you need to translate that axis into cube coordinates. This is exactly what the inverse of the model matrix yields.
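As a minimal sketch of that idea in three.js, assuming cube is the THREE.Object3D whose matrixWorld holds its current orientation (and using the older getInverse API that matches the AxisHelper snippet above):
// world-space axis for the move we want, e.g. +Y for a down-to-up turn
var worldAxis = new THREE.Vector3(0, 1, 0);
// inverse of the cube's model matrix: world coordinates -> cube coordinates
var worldToCube = new THREE.Matrix4().getInverse(cube.matrixWorld);
// transform the axis as a direction (rotation only, no translation)
var localAxis = worldAxis.clone().transformDirection(worldToCube).normalize();
// rotate the cube a quarter turn around that local axis
cube.rotateOnAxis(localAxis, Math.PI / 2);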
I'm trying to create a Rubik's Cube, but I keep having problems with rotations. When it is a single face rotation everything looks fine, but when I combine 2 rotations it gets weird.
One rotation of 90 degrees on the left face - rotation on the X axis
After a second rotation of 45 degrees on the back face - rotation on the Z axis
I also found out that if I do the X axis rotation, it appears that I have to do a Y axis rotation on some cubes and a Z axis rotation on others for everything to work, but that solution doesn't work.
I created a group consisting of the frame of the cube and the faces.
It loads everything and then, if it is the last one, adds the group to the scene.
this.cube is an array of groups.
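Combined rotations usually go weird when the second turn is applied around the group's already-rotated local axes instead of a fixed world axis. As a hedged sketch only (cube stands for the question's this.cube array, and the groups are assumed to be direct children of the scene), rotating a group around a world axis could look like this:
function rotateOnWorldAxis(group, worldAxis, angle) {
    // build the rotation in the parent's (scene/world) frame and premultiply,
    // so it is applied on top of whatever rotations the group already has
    var q = new THREE.Quaternion().setFromAxisAngle(worldAxis.clone().normalize(), angle);
    group.quaternion.premultiply(q);
}
rotateOnWorldAxis(cube[0], new THREE.Vector3(1, 0, 0), Math.PI / 2); // 90 degrees about world X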
So, for anyone familiar with Google Maps, when you zoom, it does it around the cursor.
That is to say, the matrix transformation for such a zoom is as simple as:
T * S * T^{-1} * x
Where T is the translation matrix representing the point of focus, S the scale matrix and x is any arbitrary point on the plane.
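A tiny, hypothetical JavaScript sketch of that formula applied to a 2D point, where k is the zoom factor and (px, py) the focus point under the cursor:
// x' = T * S * T^{-1} * x: translate the focus to the origin, scale, translate back
function zoomAbout(px, py, k, x, y) {
    return [(x - px) * k + px, (y - py) * k + py];
}
zoomAbout(100, 50, 2, 120, 50); // -> [140, 50]; the focus point itself stays fixed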
Now, I want to produce a similar effect with a spherical camera, think Sketchfab.
When you zoom in and out, the camera needs to be translated so as to give a similar effect to the 2D zooming in Maps. To be more precise: given a fully composed MVP matrix, there exists a family of planes parallel to the camera plane, and among those there exists a unique plane P that also contains the center of the current spherical camera.
Given that plane, there exists a point x that is the unprojection of the current cursor position onto that plane.
If the center of the spherical camera is c then the direction from c to x is d = x - c.
And here's where my challenge comes in. Zooming is implemented as just offsetting the camera radially from the center. Given a change in zoom delta, I need to find the translation vector u, collinear with d, that moves the center of the camera towards x, such that I get a similar visual effect to zooming in Google Maps.
Since I know this is a bit hard to parse, I tried to make a diagram:
TL;DR
I want to offset a spherical camera towards the cursor when I zoom; how do I pick my translation vector?
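For what it's worth, here is a hedged three.js sketch of one way to pick u. It assumes an orbit-style setup where camera is a THREE.PerspectiveCamera, target is the orbit center c (a THREE.Vector3), ndc is the cursor position in normalized device coordinates, and delta > 1 means zooming in; the radial offset of the camera (scaling its distance to the center by 1/delta) is assumed to happen elsewhere:
function zoomTowardCursor(camera, target, ndc, delta) {
    // plane through the orbit center c, parallel to the camera's image plane
    var normal = camera.getWorldDirection(new THREE.Vector3());
    var plane = new THREE.Plane().setFromNormalAndCoplanarPoint(normal, target);
    // unproject the cursor into a ray and intersect it with that plane -> x
    var raycaster = new THREE.Raycaster();
    raycaster.setFromCamera(ndc, camera);
    var x = new THREE.Vector3();
    if (!raycaster.ray.intersectPlane(plane, x)) return;
    // d = x - c; keeping the point under the cursor fixed while the viewing
    // distance shrinks by 1/delta means moving the center by (1 - 1/delta) * d
    var u = x.sub(target).multiplyScalar(1 - 1 / delta);
    target.add(u);
    camera.position.add(u); // shift camera and center together to keep the orbit geometry
}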
I have two spheres onto which panoramic images are mapped. I want to make a smooth transition between the 2 panoramas with a fade effect. For both panoramas I have an initial camera direction set for the best view.
Now the issue is: if the user is looking at some camera angle in the first panorama and then clicks a button to switch panoramas, I want to give the fade effect and land directly on the initial camera angle of the other pano.
But as both panos share a common camera, I cannot play with the camera to achieve it, so I devised the following solution -
image depicting problem
1. Rotate the target sphere so that it looks at the desired camera direction.
2. Rotate the target sphere so that it looks at the existing camera direction.
3. Fade out the source sphere.
4. Make the camera look at the new pano's camera direction.
5. Rotate the pano back to its initial orientation.
Here I am not able to find the formula for rotating the panorama to look at the camera (as if the camera were static and the pano were rotated, to achieve the same effect as moving the camera).
Can somebody please help me find the formula to rotate the pano (sphere) relative to the camera?
Matrices are a very powerful tool for solving rotation problems. I made a simple diagram to explain.
At the beginning, the camera is in the center of the left sphere and faces the initial viewpoint. Then the camera faces another point, so the camera's rotation has changed. Next, the camera moves to the center of the right sphere and keeps its orientation, and we need to rotate the right sphere. If C is the point that we want to make the camera face, first we rotate A to B; second, we rotate by some angle θ equal to the rotation from C to A.
So, how do we do that? I used matrices. Because A is the initial point, its matrix is an identity matrix, and the rotation from A to C can be represented by a matrix, calculated with the three.js function Matrix4.lookAt(eye, center, up), where 'eye' is the camera position, 'center' is the coordinate of C, and 'up' is the camera's up vector. Then the rotation matrix from C to A is the inverse of the matrix from A to C. Because the camera is facing B now, the camera's matrix equals the rotation matrix from A to B.
Finally, we put it all together; the final rotation matrix can be written as:
rotationMatrix = matrixAtoB.multiply(new THREE.Matrix4().getInverse(matrixAtoC));
jsfiddle example.
This is the matrix way; you can also solve the problem with a spherical polar coordinate system.
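To make the formula concrete, here is a hedged sketch of applying it to the target sphere; targetSphere, B (the current view direction) and C (the desired viewpoint) are placeholders, and getInverse follows the older three.js API used above:
var eye = new THREE.Vector3(0, 0, 0); // the camera sits at the sphere's center
var up = camera.up;
var matrixAtoB = new THREE.Matrix4().lookAt(eye, B, up); // rotation A -> B
var matrixAtoC = new THREE.Matrix4().lookAt(eye, C, up); // rotation A -> C
// rotationMatrix = (A -> B) * (A -> C)^-1, i.e. the formula above
var rotationMatrix = matrixAtoB.multiply(new THREE.Matrix4().getInverse(matrixAtoC));
targetSphere.setRotationFromMatrix(rotationMatrix);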
I have a 3D matrix in Matlab that was created from a volume MRI scan. I then use the Matlab toolbox iso2mesh (vol2surf) to convert this volume to a surface mesh and extract the node/vertex coordinates and faces of this mesh.
However, I find that this mesh is in the wrong coordinate system. I have tried to use imrotate to rotate the matrix, as well as rot90 to rotate the nodes matrix, but that rotates the image only around the y-axis, while I need rotation around both the x and y axes.
Does anyone have any advice on what function I can use for this?
Thanks!
I'm building a simple 3D drag-and-drop interface in Processing, and want to detect when the mouse rolls over an object. I would imagine that I need to do some matrix transformations on the 3D model coordinates to get them into screen space and so on ...
I have a simple version of this working, the problem is that when camera is moved around the scene the coordinates I get go haywire.
So how do I translate the tile coordinates into screen space (since the screenX & screenY aren't working properly)?
UPDATE:
I eventually found two examples from the Processing site on how to do this. Thanks to villintehaspam.
http://processing.org/hacks/hacks:picking
This problem is called picking. Search for mouse picking and you get lots and lots of hits.
Basic theory is this:
Get x,y coords from the mouse click.
Convert these to x, y, z in normalized device coordinates (i.e. -1 <= x <= 1, -1 <= y <= 1, z = near/far clip distance, if you have a normal projection).
Transform these coordinates by the inverse of the projection and view matrices to get world coordinates.
You now have a ray from the camera position, with the direction towards the world coordinates you just got.
Make a ray-object intersection test with the objects you want to consider. Choose the object that intersects the ray that is closest to the ray origin (camera position).
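As an illustrative sketch of the same recipe in three.js (not Processing), where Raycaster.setFromCamera handles the conversion and inverse transforms for you; event, camera and scene are assumed to exist:
// mouse click -> normalized device coordinates
var ndc = new THREE.Vector2(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1
);
// build the ray from the camera through the cursor (the conversion and inverse-transform steps above)
var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(ndc, camera);
// ray-object intersection test; results are sorted by distance, so the
// first hit is the object closest to the camera (the last step above)
var hits = raycaster.intersectObjects(scene.children, true);
if (hits.length > 0) {
    var picked = hits[0].object;
}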