Where does the vertical shadow on the left cube come from?
https://jsfiddle.net/yz0sfr35/
I'm using:
renderer.shadowMap.type = THREE.BasicShadowMap;
What you are seeing is pixelation in your shadows, due to the fact that your shadow camera's frustum is orders of magnitude larger than your scene.
One solution in your case is to reduce the angle of your spotlight:
light.angle = Math.PI / 180;
Keep your shadow frustums tight around your scene for quality shadows.
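For illustration, a hedged sketch of a tightened spotlight shadow setup (all values are assumptions for illustration, not taken from the fiddle):
var light = new THREE.SpotLight(0xffffff);
light.position.set(10, 20, 10);
light.angle = Math.PI / 8;            // narrower cone => tighter shadow frustum
light.castShadow = true;
light.shadow.camera.near = 5;         // pull the near/far planes in around the scene
light.shadow.camera.far = 50;
light.shadow.mapSize.set(1024, 1024); // more shadow map resolution also reduces pixelation
scene.add(light);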
https://jsfiddle.net/yz0sfr35/2/
three.js r.87
Camera view
Camera settings:
const camera = new THREE.PerspectiveCamera(32, aspectRatio(), 1, 1000)
camera.position.set(30, 30, 30)
I am adding the camera to the cube with cube.add(camera),
and then setting camera.lookAt(cube.position).
When I apply rotation.y to the cube mesh in the render loop to keep the cube rotating, it seems that the cube is not rotating but the floor is rotating instead.
I want the cube to rotate the way it does when OrbitControls is used, while keeping the camera at the same point relative to the cube.
If I remove lookAt(cube.position), the cube rotates as expected, but the camera doesn't follow the cube properly when it moves.
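A minimal sketch reproducing the setup described (assuming an existing scene, renderer, floor, and aspectRatio() helper):
var cube = new THREE.Mesh(new THREE.BoxGeometry(5, 5, 5), new THREE.MeshNormalMaterial());
scene.add(cube);
var camera = new THREE.PerspectiveCamera(32, aspectRatio(), 1, 1000);
camera.position.set(30, 30, 30);
cube.add(camera);                   // the camera inherits the cube's rotation...
camera.lookAt(cube.position);
function animate() {
    requestAnimationFrame(animate);
    cube.rotation.y += 0.01;        // ...so cube and camera stay fixed relative to
    renderer.render(scene, camera); // each other and the floor appears to rotate
}
animate();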
Right now I'm using OrbitControls, and I can only rotate 180 degrees in the up-and-down direction. In the other direction I can rotate forever; I think it is the z direction. Anyway, how can I make rotation completely limitless in all directions?
Here's my code; I tried it with and without Infinity:
this.scene_threeD = new THREE.Scene();
this.camera_threeD = new THREE.PerspectiveCamera( 75, width_threeD / height_threeD, 0.1, 1000 );
this.renderer_threeD = new THREE.WebGLRenderer({
    canvas: threeDCanvas,
    preserveDrawingBuffer: true,
    antialias: true
});
this.renderer_threeD.setSize( width_threeD, height_threeD);
controls = new THREE.OrbitControls(this.camera_threeD, this.renderer_threeD.domElement);
controls.maxPolarAngle = Infinity;
controls.minPolarAngle = -Infinity;
controls.maxAzimuthAngle = Infinity;
controls.minAzimuthAngle = -Infinity;
controls.update();
The problem with an "orbital camera" is that (by definition) it always tries to keep the camera's "up" vector pointing upwards. This means the camera orientation is undefined when you are looking straight up or down. That is why three.js implements a makeSafe() method that keeps the polar angle just within +/- 90 degrees.
If you were to remove this limitation, you would probably see the camera instantly flip direction when passing the 90-degree mark (or worse). This is generally undesired behaviour in an application.
To sum things up: if you want limitless rotation, you don't want an orbital camera. This is a conceptual limitation, not a technical one.
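If truly unconstrained rotation is needed, one commonly suggested alternative (my suggestion, not part of the original answer) is THREE.TrackballControls, which does not pin the camera's "up" vector:
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.rotateSpeed = 2.0; // illustrative value
function animate() {
    requestAnimationFrame(animate);
    controls.update();      // TrackballControls requires update() every frame
    renderer.render(scene, camera);
}
animate();
The trade-off is that without a fixed up vector the scene can end up tilted or upside down, which is exactly what an orbital camera is designed to prevent.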
I am attempting to displace a spherized cube using 6 generated RGBA DataTextures (one per cube face). The displacement leaves a visible seam along the edges of the textures:
I have tried setting, and tweaking, the following settings:
texture.generateMipmaps = true;
texture.wrapS = THREE.ClampToEdgeWrapping;
texture.wrapT = THREE.ClampToEdgeWrapping;
texture.minFilter = THREE.NearestMipMapLinearFilter;
I have also checked the geometry with a MeshNormalMaterial, and with the texture as a map on a MeshBasicMaterial, and no discontinuities appear.
I assume there is an issue with computing normals/tangents at the edges, but I've recomputed the normals after spherizing (with geom.computeVertexNormals()) and still see no improvement.
Figured out the issue: it was related to the way I was mapping gl_FragCoord onto the cube face while generating the textures initially. I was doing:
vec2 cubeCoord = (gl_FragCoord.xy * 2.0 / uResolution.xy) - 1.0;
This generally looks fine, except at the edges of the cube faces, where it leaves the shared edge coordinates off by 0.5 (in gl_FragCoord space). A lot of debugging later, I updated it to:
vec2 cubeCoord = (gl_FragCoord.xy - (uResolution.xy / 2.0)) / ((uResolution.xy / 2.0) - 0.5);
Together with texture.minFilter = texture.magFilter = THREE.LinearFilter;, this worked like a charm, since it guarantees that the outer pixels of each cube face's texture are shared exactly with each bordering face.
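In three.js terms, the final texture settings amount to something like this (a minimal sketch; texture stands for each generated per-face DataTexture):
texture.wrapS = THREE.ClampToEdgeWrapping;
texture.wrapT = THREE.ClampToEdgeWrapping;
texture.minFilter = THREE.LinearFilter; // no mipmaps, so border texels stay exact
texture.magFilter = THREE.LinearFilter;
texture.needsUpdate = true;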
I have a sphere centered in front of my camera, with a known radius and a known distance from the camera. How can I adjust the camera's field of view (FOV) to exactly fit the sphere within an arbitrary viewport?
This answer is similar, but I want to adjust the FOV instead of the camera's distance.
To adjust the FOV to fit the sphere, I needed to use inverse trigonometric functions to calculate the angle from the triangle formed from the distance to the sphere and the furthest visible point on the sphere.
Triangle that will give the correct angle:
// to get the fov to fit the sphere into the camera
var vFOV = 2 * Math.asin(sphereRadius / distance);
// get the viewport's aspect ratio to derive a horizontal fov
var aspect = this.width / this.height;
// more trig to calculate a horizontal fov, used to fit a sphere horizontally
var hFOV = 2 * Math.atan(Math.tan(vFOV / 2) / aspect);
This gives the answer in radians. Convert hFOV or vFOV to degrees with fov * (180 / Math.PI) and apply it to camera.fov.
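A minimal sketch of that last step (using vFOV from above; camera.fov expects degrees, and the projection matrix must be rebuilt after changing it):
camera.fov = vFOV * (180 / Math.PI); // or THREE.Math.radToDeg(vFOV)
camera.updateProjectionMatrix();     // required after changing fov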
I initially ran into the trap of using the wrong triangle. As this answer states:
"The sphere is being clipped for the same reason that if you stand close to a large sphere you can't see its 'north pole'."
Don't do this; shown below is the wrong triangle, which yields a cropped view:
Is there any way to configure the camera in three.js so that when a 2D object (line, plane, image) is rendered at z=0, it doesn't bleed (from perspective) into neighbouring pixels?
Ex:
var plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
plane.position.x = 4;
plane.position.y = 3;
scene.add(plane);
...
// Get canvas pixel information
context.readPixels(....);
Examining the data from readPixels, I always find that the object renders into its surrounding pixels (e.g. the pixel at 3,3 may contain some color information), but I would like the rendering to be pixel-perfect when the element drawn is on the z=0 plane.
You probably want to use THREE.OrthographicCamera for the 2D elements instead of THREE.PerspectiveCamera. That way they are not affected by perspective projection.
Which pixels get rendered also depends on where your camera is. If your camera is at z=1, for example, then a lot of pixels will get rendered. If you move your camera to z=1000, then, due to perspective, perhaps only one pixel of your geometry will be rendered.
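For example, a minimal sketch of a pixel-aligned orthographic setup (width and height are assumed to be the canvas size in pixels):
var camera = new THREE.OrthographicCamera(
    0, width,   // left, right
    height, 0,  // top, bottom (origin at the bottom-left corner)
    0.1, 10     // near, far
);
camera.position.z = 1; // geometry at z = 0 lies inside the frustum
With this frustum, one world unit maps to exactly one pixel regardless of the camera's distance, which is what makes pixel-perfect output possible.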