I have a cube, and my camera is located at the center of the cube.
I'm just trying to set tile materials on the parts of the cube that are shown on the camera.
How can I know which parts of the cube are shown on the camera?
For example:
Top, Left, Right, Bottom → partially shown parts of the object
Front → fully shown
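For what it's worth, here is a minimal heuristic sketch of how this could be checked in three.js, assuming the cube is axis-aligned and the camera sits at its center. The function name classifyFaces, the FACE_NORMALS table, and the threshold values are illustrative only and would need tuning to the camera's field of view:

import * as THREE from 'three';

// Outward normals of the six cube faces, keyed by face name.
const FACE_NORMALS = {
  right:  new THREE.Vector3( 1,  0,  0),
  left:   new THREE.Vector3(-1,  0,  0),
  top:    new THREE.Vector3( 0,  1,  0),
  bottom: new THREE.Vector3( 0, -1,  0),
  front:  new THREE.Vector3( 0,  0,  1),
  back:   new THREE.Vector3( 0,  0, -1),
};

function classifyFaces(camera) {
  const viewDir = new THREE.Vector3();
  camera.getWorldDirection(viewDir); // direction the camera is looking

  const result = {};
  for (const [name, normal] of Object.entries(FACE_NORMALS)) {
    const d = viewDir.dot(normal);
    // d near 1: the camera looks straight at this face (shown in full),
    // d near 0: the face is off to the side (partially shown at the edges),
    // d clearly negative: the face is behind the camera (not shown).
    result[name] = d > 0.5 ? 'full' : d > -0.25 ? 'partial' : 'hidden';
  }
  return result;
}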
I'm currently following a tutorial on how to get a 3d model view in three.js
But I notice that whenever I pan the camera (right-click and drag), the center of rotation changes.
Is it possible to make the axis of rotation stay in the world center/object center?
Many thanks!
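In case it helps, a minimal sketch assuming the tutorial uses OrbitControls (camera and renderer are whatever the tutorial already creates): OrbitControls always orbits around controls.target, and panning moves that target, which is why the rotation center appears to drift.

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);

// Option 1: keep the pivot fixed by disabling panning altogether.
controls.enablePan = false;

// Option 2: allow panning, but snap the pivot back to the object's center
// (assumed here to be the world origin) whenever you want to re-center.
controls.target.set(0, 0, 0);
controls.update();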
I have a very large SphereBufferGeometry that I project an equirectangular 360 image onto in order to create a 360 scene. My application has two functions: 1) set the initial view of which part of the scene should be visible when the scene loads, and 2) always load the scene at those saved coordinates.
To set the initial view of which part of the scene should be viewed at scene load:
I can use OrbitControls to move the camera to look at a certain direction of this sphere, and I can save the position of the camera when I look at a 360 scene position I like.
To always load the scene at those saved coordinates:
I can set the position of the camera to this previously saved position and view the scene at my favorite starting location.
This works well, but I do not want to set the camera position each time a 360 scene loads. Rather, my requirement is to load the scene with the camera in a neutral position and instead rotate the sphere mesh so that I am looking at the [x, y, z] position of the favorite view set earlier.
Is it at all possible to take a camera position and rotation and use those values to rotate or position a mesh on my sphere?
Can I use OrbitControls to rotate the entire scene/sphere instead of the camera onClick?
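One hedged sketch of the idea, assuming the sphere mesh (sphereMesh below, the Mesh built from the SphereBufferGeometry) starts unrotated and savedQuaternion is the camera orientation captured when the favorite view was chosen: applying the inverse of that orientation to the sphere should bring the favorite spot in front of a camera left in its neutral pose.

// At "set view" time: remember the camera's orientation.
const savedQuaternion = camera.quaternion.clone();

// At load time: leave the camera in its neutral pose and rotate the sphere instead.
// (.invert() in recent three.js releases; older releases call it .inverse().)
sphereMesh.quaternion.copy(savedQuaternion).invert();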
I am currently using three.js and trying to create a 3D experience with a semi-transparent material that can be viewed from all angles. I've noticed that depending on the camera angle, only certain portions of the mesh are semi-transparent and will show the content behind them. In the example below I've created two half cylinders and applied the same transparent material with the Stack Overflow logo. The half cylinder on the left properly shows the logo on the closest surface as well as on the surface behind it. The half cylinder on the right only shows the logo on the closest surface and fails to render the logo that wraps behind it. However, it does properly render the background image, so the material is still treated correctly as transparent:
If I spin the orbital camera around 180 degrees, the side that originally failed to see through now works and the other side exhibits the wrong behavior. This leads me to believe it's related to the camera position / depth sorting. The material in this case is a standard MeshPhongMaterial with transparent set to true, side set to DoubleSide, and a single map for the transparent Stack Overflow logo. The geometry is formed from an open-ended CylinderGeometry. Any help would be greatly appreciated!
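Not a definitive fix, but a sketch of a common workaround for this kind of self-transparency sorting problem: render the cylinder in two passes, back faces first and front faces second, so the far side is never discarded by the depth test. The names logoTexture and cylinderGeometry stand in for whatever the question's code already has.

const baseParams = {
  map: logoTexture,     // the transparent logo texture
  transparent: true,
  depthWrite: false,    // keep transparent fragments from occluding each other
};

const backMaterial  = new THREE.MeshPhongMaterial({ ...baseParams, side: THREE.BackSide });
const frontMaterial = new THREE.MeshPhongMaterial({ ...baseParams, side: THREE.FrontSide });

const backMesh  = new THREE.Mesh(cylinderGeometry, backMaterial);
const frontMesh = new THREE.Mesh(cylinderGeometry, frontMaterial);
backMesh.renderOrder = 1;   // inner surface drawn first
frontMesh.renderOrder = 2;  // outer surface drawn on top
scene.add(backMesh, frontMesh);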
I want to have the camera move with a mesh. Forward/backward motion was easy:
if Input.is_key_pressed(KEY_W):
    self.translation.z -= movement_speed  # relies on camera script to move camera
if Input.is_key_pressed(KEY_S):
    self.translation.z += movement_speed
I just put those short blocks on both the camera and the mesh. But I can't figure out how to rotate the mesh about the camera while rotating the camera. If I just rotated the mesh, it would rotate about its center point and end up unaligned with the camera. In Photoshop, you can set anchor points to rotate a layer about a point other than the center. How can I set an anchor point to another element/node in Godot?
EDIT:
The solution to rotation was pretty simple. All I had to do was make the camera a child of the mesh I wanted it to follow. But then the camera did not move with the mesh... How do I get the motion to work?
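For reference, a rough GDScript sketch (assuming Godot 3.x) of the parent/child setup: the script sits on the mesh's root Spatial, the Camera is a child node, and only the parent is moved and rotated, so the camera follows automatically and the parent's origin acts as the anchor point. The key bindings and speed values are placeholders.

extends Spatial

export var movement_speed = 0.1
export var rotation_speed = 0.02

func _process(_delta):
    # Move only the parent; the child Camera inherits the transform.
    if Input.is_key_pressed(KEY_W):
        translation.z -= movement_speed
    if Input.is_key_pressed(KEY_S):
        translation.z += movement_speed
    # Rotating the parent swings the child Camera around the parent's origin.
    if Input.is_key_pressed(KEY_A):
        rotate_y(rotation_speed)
    if Input.is_key_pressed(KEY_D):
        rotate_y(-rotation_speed)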
I'm working on an app that renders a 3D scene simulating a real space inside an iPhone, making its screen appear to be a hollow box, as seen in the sketch below:
(note the camera position order down below)
The problem is how to calculate the camera parameters so the box looks genuinely fixed to the screen edges.
Is this feasible through SceneKit?
In this configuration the camera's zNear plane corresponds to the screen of the iPhone. From that you can derive a z position from the camera's field of view and the screen's width (see here).
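For example, assuming a vertical field of view θ and a screen height h expressed in scene units, placing the camera at a distance z = (h / 2) / tan(θ / 2) from the screen plane makes the visible frustum span exactly the screen height at that plane; the same relation with the screen width applies when working from a horizontal field of view.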