I am looking to create a wedge shape using other geometry that can be manipulated by the user.
The height, width and z rotation of the Cylinder can be changed by the user.
The angle of Plane A can be changed by the user.
I want to create a wedge shape using Points A through F.
What is the best way to determine the Points A through F so that I can use them to create a ConvexGeometry? Or is there a different method for creating this geometry?
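In case it is useful, here is a minimal sketch of feeding such points into ConvexGeometry once they have been determined (this assumes the ConvexGeometry addon from the three.js examples is included, and that pointA through pointF are THREE.Vector3 instances computed from the cylinder and plane parameters; the names are only illustrative):

// pointA..pointF are assumed to be THREE.Vector3 vertices derived from the
// cylinder's height/width/rotation and the angle of Plane A.
var points = [pointA, pointB, pointC, pointD, pointE, pointF];

// ConvexGeometry builds the convex hull of the points, which is enough here
// because a wedge is a convex solid.
var wedgeGeometry = new THREE.ConvexGeometry(points);
var wedge = new THREE.Mesh(wedgeGeometry, new THREE.MeshStandardMaterial({ color: 0x8899aa }));
scene.add(wedge);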
I have added a pyramid mesh into the scene and I can rotate it about the x, y and z axes individually.
What I need to do is add an object to the scene that is 5 coloured dots to represent the 5 vertices of the pyramid, and then rotate this object.
I know the coordinates of the vertices but I'm not sure how I would implement this. To rotate the pyramid mesh I am using mesh.rotation.x, mesh.rotation.y, mesh.rotation.z.
Should I maybe try to create a custom mesh containing the 5 vertices and use mesh.rotation, or is a different approach easier?
The usual approach for solving this issue is to add the coloured dots as child objects of your pyramid. If you then rotate the pyramid, the dots will rotate too (because they keep their position relative to their parent).
The positions of the coloured dots are simply the coordinates of the respective pyramid vertices.
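A minimal sketch of that parent/child setup, assuming pyramidMesh is your existing pyramid mesh and the five vertex coordinates below are placeholders for your real ones:

// The five pyramid vertices in the pyramid's local coordinate system (placeholder values).
var vertices = [
  new THREE.Vector3( 1, 0,  1),
  new THREE.Vector3( 1, 0, -1),
  new THREE.Vector3(-1, 0, -1),
  new THREE.Vector3(-1, 0,  1),
  new THREE.Vector3( 0, 2,  0)  // apex
];
var colors = [0xff0000, 0x00ff00, 0x0000ff, 0xffff00, 0xff00ff];

vertices.forEach(function (vertex, i) {
  var dot = new THREE.Mesh(
    new THREE.SphereGeometry(0.05, 8, 8),
    new THREE.MeshBasicMaterial({ color: colors[i] })
  );
  dot.position.copy(vertex);  // position is relative to the parent pyramid
  pyramidMesh.add(dot);       // child of the pyramid, so it follows every rotation
});

// Rotating the pyramid as before now rotates the dots with it:
pyramidMesh.rotation.y += 0.01;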
I am trying to draw a line segment from a point in a 3D scene to a point on a HUD UI. One end of the line segment is specified in 3D, e.g. (1.232, -34.12, 4.21), but the other I want to specify in 2D pixel coordinates, e.g. (320, 200).
How can I convert the 2D coordinate to a 3D point and have it remain at those pixel coordinates as the (perspective) camera moves? Initially I thought of taking the 2D position and projecting it onto the near plane of the view frustum, but I wasn't sure how to do it or whether there was a better way.
// convert the pixel position (320, 200) to normalized device coordinates first
var vector = new THREE.Vector3((320 / window.innerWidth) * 2 - 1, -(200 / window.innerHeight) * 2 + 1, 0.5);
vector.unproject(camera);
will return in vector a 3D point which you can use to draw.
If you keep unprojecting as the perspective camera moves, the 2D point will appear to stay fixed at those pixel coordinates in your HUD.
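A sketch of how this could be updated per frame, assuming scenePoint is the fixed 3D endpoint, line is a THREE.Line whose geometry is a BufferGeometry with two vertices, and the target pixel is (320, 200) (all names here are illustrative):

function updateHudLine() {
  // Re-unproject every frame so the endpoint tracks the same pixel as the camera moves.
  var hudPoint = new THREE.Vector3(
    (320 / window.innerWidth) * 2 - 1,
    -(200 / window.innerHeight) * 2 + 1,
    0.5
  );
  hudPoint.unproject(camera);

  var positions = line.geometry.attributes.position;
  positions.setXYZ(0, scenePoint.x, scenePoint.y, scenePoint.z);
  positions.setXYZ(1, hudPoint.x, hudPoint.y, hudPoint.z);
  positions.needsUpdate = true;
}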
I'm using the displacementMap attribute of MeshPhongMaterial with a loaded greyscale texture. The extrusion/displacement works fine, but the normals and faces of the affected mesh geometry do not get updated.
I used geometry.computeFaceNormals() and geometry.computeVertexNormals().
I want to make a walkable character on the terrain by ray-casting down and reading the height (y) value of the intersected face/vertex to offset the character.
Enclosed is an image of the current displaced geometry with the VertexNormalsHelper (red lines) and the FaceNormalsHelper (green lines) applied.
Does someone know how to update them correctly?
The displacement map is only applied in the vertex shader, meaning the underlying geometry isn't actually altered, and that underlying geometry is what the helpers are showing.
You would have to iterate over and change the actual vertices in the geometry buffer itself to alter them.
The same of course applies to the raycaster intersection test, which will use the underlying vertices/faces.
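If you do want the geometry itself to carry the displacement, a rough sketch of baking it in on the CPU could look like this (assuming a PlaneBufferGeometry terrain, a loaded greyscale image displacementImage, and a displacementScale matching the material; names and orientation conventions are illustrative):

// Draw the displacement image into a canvas so the grey values can be read per vertex.
var canvas = document.createElement('canvas');
canvas.width = displacementImage.width;
canvas.height = displacementImage.height;
var ctx = canvas.getContext('2d');
ctx.drawImage(displacementImage, 0, 0);
var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;

var position = geometry.attributes.position;
var uv = geometry.attributes.uv;

for (var i = 0; i < position.count; i++) {
  var px = Math.floor(uv.getX(i) * (canvas.width - 1));
  var py = Math.floor((1 - uv.getY(i)) * (canvas.height - 1));
  var grey = pixels[(py * canvas.width + px) * 4] / 255;  // red channel of an RGBA pixel
  position.setZ(i, grey * displacementScale);             // plane's local Z is its normal direction
}

position.needsUpdate = true;
geometry.computeVertexNormals();  // normals, helpers and raycasting now match the displaced surface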
"I want to make a walkable character on the terrain by ray-casting down and receiving the height (y)"
A possible workaround to avoid altering the geometry is to first get the UV position of your character on the plane, then map that to a 2D position on the displacement map, and finally pick the value at that point, scale it, and use it as the height relative to the mesh.
This may even be more efficient than raycasting, but you might have to interpolate for positions that aren't exactly on a vertex.
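A sketch of that lookup, assuming an axis-aligned plane of size terrainWidth by terrainDepth centred at the origin, with canvas, pixels and displacementScale prepared as in the previous snippet (sign and orientation conventions may need flipping for your setup):

function getTerrainHeight(worldX, worldZ) {
  // Map the character's world position to UV coordinates on the plane.
  var u = worldX / terrainWidth + 0.5;
  var v = worldZ / terrainDepth + 0.5;

  // Map the UV to a pixel in the displacement map and read its grey value.
  var px = Math.floor(u * (canvas.width - 1));
  var py = Math.floor(v * (canvas.height - 1));
  var grey = pixels[(py * canvas.width + px) * 4] / 255;

  return grey * displacementScale;  // height above the undisplaced plane
}

// character.position.y = terrain.position.y + getTerrainHeight(character.position.x, character.position.z);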
I have two spheres onto each of which a panoramic image is mapped. I want to make a smooth transition between the 2 panoramas with a fade effect. For both panoramas I have an initial camera direction set for the best view.
Now the issue is: if the user is looking at some camera angle in the first panorama and then clicks a button to switch panoramas, I want to apply the fade effect and land directly on the initial camera angle of the other pano.
But as both panos share a common camera, I cannot simply play with the camera to achieve this, so I devised the following solution:
(image depicting the problem)
1. Rotate the target sphere so that it looks at the desired camera direction.
2. Rotate the target sphere so that it looks at the existing camera direction.
3. Fade out the source sphere.
4. Make the camera look at the new pano's camera direction.
5. Rotate the pano back to its initial orientation.
Here I am not able to find the formula for rotating the panorama to look at the camera (i.e. the camera is static and the pano is rotated to achieve the same effect as if we were moving the camera).
Can somebody please help me find the formula for rotating the pano (sphere) relative to the camera?
Matrices are a very powerful tool for solving rotation problems. I made a simple example to explain.
At the beginning, the camera is at the center of the left sphere facing the initial viewpoint A; then the camera turns to face another point B, so the camera's rotation has changed. Next, the camera moves to the center of the right sphere and keeps its orientation, and we need to rotate the right sphere. If C is the point that we want the camera to face, first we apply the rotation from A to B, and second we apply a rotation θ equal to the rotation from C to A.
So, how do we do that? I used matrices. Because A is the initial point, its matrix is the identity matrix, and the rotation from A to C can be represented by a matrix calculated with the three.js function Matrix4.lookAt(eye, center, up), where 'eye' is the camera position, 'center' is the coordinate of C, and 'up' is the camera's up vector. The rotation matrix from C to A is then the inverse of the matrix from A to C. Because the camera is now facing B, the camera's matrix equals the rotation matrix from A to B.
Finally, putting it all together, the final rotation matrix can be written as: rotationMatrix = matrixAtoB.multiply(new THREE.Matrix4().getInverse(matrixAtoC));
jsfiddle example.
This is the matrix way; you could also solve the problem with spherical polar coordinates.
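A minimal sketch of that matrix approach in code, assuming sphere is the target panorama sphere centred on the camera and pointC is the world-space point the camera should end up looking at (the variable names are illustrative):

// Rotation that would make the camera look at C from its current position.
var matrixAtoC = new THREE.Matrix4().lookAt(camera.position, pointC, camera.up);

// The camera is currently facing B, so its current rotation is the A-to-B matrix.
var matrixAtoB = new THREE.Matrix4().makeRotationFromQuaternion(camera.quaternion);

// rotationMatrix = (A to B) * inverse(A to C), as in the formula above.
var rotationMatrix = matrixAtoB.multiply(new THREE.Matrix4().getInverse(matrixAtoC));

// Apply it to the target sphere so its desired view lines up with the current camera direction.
sphere.setRotationFromMatrix(rotationMatrix);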
I'm trying to make a Rubik's cube game in WebGL using three.js (you can try it here).
I have problems detecting which axis I have to rotate my cube around, according to the rotation of the cube. For instance, if the cube is in its original position/rotation and I want to rotate the left layer from down to up, I must make a rotation around the Y axis. But if I rotate my cube 90 degrees on Y, I will have to rotate it around the Z axis to rotate the left layer from down to up.
I'm trying to find a way to get the correct rotation axis according to the orientation of the cube.
For the moment, I check which axis vector of the cube's rotation matrix is most parallel to the vector (0, 1, 0) when I want to move a front layer from down to up. But it does not work in some edge cases.
I guess there is some simple way to do this, but I'm not good enough with matrices and the maths involved :)
An AxisHelper can show the axes of the scene, which you could use to determine the orientation.
var axishelper = new THREE.AxisHelper(40);
axishelper.position.y = 300;
scene.add(axishelper);
You could also log your cube and check the position and rotation properties with Chrome Developer Tools or Firebug.
You can store the orientation of each cube in its own 4x4 matrix (i.e. a "model" matrix) that tells you how to get from the cube's local coordinates to world coordinates. Now, since you want to rotate the cube around an axis (i.e. a vector) given in world coordinates, you need to transform that axis into cube coordinates. This is exactly what the inverse of the model matrix yields.
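A sketch of that idea, assuming cube is the Object3D whose orientation you track and the world-space rotation axis is "up" (the names are illustrative):

// World-space axis we want to rotate a layer around.
var worldAxis = new THREE.Vector3(0, 1, 0);

// The inverse of the model matrix takes world coordinates into cube-local coordinates.
cube.updateMatrixWorld();
var worldToLocal = new THREE.Matrix4().getInverse(cube.matrixWorld);

// Transform the axis as a direction (ignoring translation) and find the dominant local axis.
var localAxis = worldAxis.clone().transformDirection(worldToLocal);

var ax = Math.abs(localAxis.x), ay = Math.abs(localAxis.y), az = Math.abs(localAxis.z);
var rotationAxis = (ax >= ay && ax >= az) ? 'x' : (ay >= az) ? 'y' : 'z';

// rotationAxis names the cube-local axis to rotate the layer around;
// the sign of that component of localAxis gives the rotation direction.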