Artifacting/shaking occurring for THREE.Sprite() when animating orthographic camera - three.js

I've gotten sprites to work fine in my game engine, but I'm noticing some odd issues when moving the camera. Mainly, the sprites' pixels kind of 'shake' or go 'wavy' when the camera moves. You can see an example of this here (please watch in HD): http://youtu.be/om3EhKsGd9M
I'm setting the following properties on my sprite texture and sprite material, and my textures are 64 x 64 pixels in size:
spriteTexture.magFilter = THREE.NearestFilter;
spriteTexture.minFilter = THREE.NearestMipMapNearestFilter;

var spriteMaterial = new THREE.SpriteMaterial({
    map: sheet,
    useScreenCoordinates: true,
    transparent: true,
    side: THREE.DoubleSide
});
Any ideas on a solution for this issue? I'd wager that this is somehow related to how the sprites are rendered when mapped into 3D space.

Related

Map Texture to Plane without distortion

In Three.js, how can I change the way in which a texture gets mapped onto a plane?
Let's assume we have a 1x1 plane and a 16:9 image. How can I control the way in which that image gets mapped onto the plane?
By default, the image gets "squished". I would like it to maintain its aspect ratio and have any overlap get "cut off". Is there a way to configure the material or texture to do this, or would I use a shader? If so, what would it need to look like?
const planeMesh = new THREE.Mesh(
    new THREE.PlaneBufferGeometry(1, 1),
    new THREE.MeshBasicMaterial({
        map: texture,
    })
);
PS: In the future, I would also like to be able to zoom into and out of the image on mouse hover without affecting the size of the plane, so I would think a shader might be better?
A Texture already has several properties built-in that can do what you're looking for.
const texture = textureLoader.load("whatever.png");
const planeMesh = new THREE.Mesh(
    new THREE.PlaneBufferGeometry(1, 1),
    new THREE.MeshBasicMaterial({
        map: texture,
    })
);

// Sets the pivot point to the center of the texture
texture.center.set(0.5, 0.5);

// Make the texture repeat 0.5625 times on the x-axis to match the 16:9 ratio
let ratio = 9 / 16;
texture.repeat.set(ratio, 1);

// Multiply the repeat by a factor below 1 to scale the texture up ("zoom in")
let zoom = 0.5;
texture.repeat.set(ratio * zoom, 1 * zoom);
You can read more about the .repeat, .center, and even .rotation properties in the Texture docs. Just keep in mind that repeating a texture is a bit counter-intuitive, because you're doing the inverse of scaling it: to scale a texture by 2, you tell it to repeat 1/2 times.
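Building on that, here is a minimal sketch of a hypothetical coverTexture helper (the name and parameters are mine, not part of three.js) that generalizes the repeat math above to emulate CSS-style "cover" fitting for any plane/image aspect pair:

// Hypothetical helper (not part of three.js): crops a texture so it
// covers a plane without distortion, like CSS "background-size: cover".
function coverTexture(texture, planeAspect, imageAspect) {
    texture.center.set(0.5, 0.5); // crop around the middle

    if (imageAspect > planeAspect) {
        // Image is wider than the plane: show full height, crop the sides
        texture.repeat.set(planeAspect / imageAspect, 1);
    } else {
        // Image is taller than the plane: show full width, crop top/bottom
        texture.repeat.set(1, imageAspect / planeAspect);
    }
}

// Usage for the question's 1x1 plane and a 16:9 image
// (yields repeat.x = 0.5625, matching the snippet above):
// coverTexture(texture, 1 / 1, 16 / 9);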

No cubemap ("material.envMap") reflection on polygon faces with orthographic cam

Why won't a MeshPhongMaterial's envMap property work on polygonal faces when viewed through an orthographic camera?
It works on spheres but not an IcosahedronGeometry, for example. If I set the detail parameter of the IcosahedronGeometry to 2+ (more faces), the envMap begins to show. But if I switch to perspective cam, the envMap is fully visible even with detail of 0.
This is what it looks like with perspective cam, note the cubemap reflection of some clouds:
This is what it looks like with the orthographic cam and detail set to 0; note the lack of cubemap reflection (please ignore the warping of the image):
Orthographic cam, detail set to 1; the cubemap reflection is back:
The only difference between these two versions of the script is the camera.
Here's the code I'm using to create this object:
import * as THREE from 'three';

import uvGridImg from './img/grid.png';
import nxImg from './img/nx_50.png';
import pxImg from './img/px_50.png';
import nyImg from './img/ny_50.png';
import pyImg from './img/py_50.png';
import nzImg from './img/nz_50.png';
import pzImg from './img/pz_50.png';
const envTexture = new THREE.CubeTextureLoader().load([
pxImg, //right
nxImg, //left
pyImg, //top
nyImg, //bottom
pzImg, //back
nzImg, //front
])
envTexture.mapping = THREE.CubeReflectionMapping
const texture = new THREE.TextureLoader().load(uvGridImg)
const icosahedronGeometry = new THREE.IcosahedronGeometry(1, 0)
const material = new THREE.MeshPhongMaterial()
material.map = texture;
material.envMap = envTexture;
// An attempt to explicitly set every potentially relevant property...
material.envMapIntensity = 0.0;
material.transparent = false;
material.opacity = 1.0;
material.depthTest = true;
material.depthWrite = true;
material.alphaTest = 0.0;
material.visible = true;
material.side = THREE.FrontSide;
material.flatShading=true;
material.roughness = 0.0;
material.color.setHex(0xffffff);
material.emissive.setHex(0x0);
material.specular.setHex(0xffffff);
material.shininess = 30.0;
material.wireframe = false;
material.flatShading = false;
material.combine = THREE.MultiplyOperation;
material.reflectivity = 1.0;
material.refractionRatio = 1.0;
const icosahedron = new THREE.Mesh(icosahedronGeometry, material)
icosahedron.position.x = 0
scene.add(icosahedron);
For an MVCE, please see the example from this tutorial (you will have to add your own orthographic cam to compare with the given perspective cam). Here are image files for the textures.
UPDATE: It seems like non-spherical geometries cannot render a cubemap reflection correctly through an orthographic cam. The plane, cylinder, and box geometries all fail to render an environment-map reflection beyond painting the entire face one uniform reflective color. The sphere, lathe, and *hedron geometries (at high levels of detail) will render cubemap reflections.
Is there any way around this? This seems like a huge limitation while working with orthographic cameras.
This is the expected behavior.
With perspective cameras, the reflective "rays" separate as they get further away from the camera, reflecting a wider angle of the envMap.
With an ortho camera these reflective "rays" do not separate because they're parallel. So the reflection on a flat face is a very narrow angle of the envMap.
See this demo I quickly put together to demonstrate what you're seeing.
It seems to work on spheres because when the parallel orthographic "rays" bounce off a rounded surface, they grow wider apart; they are no longer parallel, as they would be after bouncing off a flat face.
You can see the reflections still work in your demo because the faces alternate between light and dark as you rotate them. You're just looking at a much narrower segment of the envMap.
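To make that concrete, here is a small sketch (the vectors and setup are mine, purely illustrative) using THREE.Vector3.reflect. With an orthographic camera the incident view direction is identical at every point on a flat face, so every reflected ray samples the same envMap direction; with a perspective camera the view direction varies across the face, so the reflected rays fan out:

import * as THREE from 'three';

// A flat face pointing at +Z: the normal is constant across the face.
const normal = new THREE.Vector3(0, 0, 1);

// Orthographic camera: the same view direction for every fragment,
// so the reflected direction is identical everywhere on the face.
const orthoDir = new THREE.Vector3(0, 0, -1);
const r1 = orthoDir.clone().reflect(normal);
const r2 = orthoDir.clone().reflect(normal);
console.log(r1.equals(r2)); // true -> one uniform reflection color

// Perspective camera: view direction differs per point on the face,
// so reflected rays diverge and sweep a wider slice of the envMap.
const eye = new THREE.Vector3(0, 0, 5);
const p1 = new THREE.Vector3(-1, 0, 0);
const p2 = new THREE.Vector3(1, 0, 0);
const d1 = p1.clone().sub(eye).normalize().reflect(normal);
const d2 = p2.clone().sub(eye).normalize().reflect(normal);
console.log(d1.equals(d2)); // false -> reflection varies across the face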

ThreeJS: how to add a PointLight to a scene with PNG textures?

I have a scene with one mesh with PNG textures. I took the PointLight code from a ThreeJS example and added it to my project:
var color = 0xffffff; // assumption: `color` is defined elsewhere in the original example
var intensity = 15;
var pointLight = new THREE.PointLight( color, intensity, 20 );
pointLight.castShadow = true;
pointLight.shadow.camera.near = 1;
pointLight.shadow.camera.far = 60;
pointLight.shadow.bias = -0.005;
But I don't see light or shadows on my mesh:
I created a codepen to reproduce this case.
How can I resolve this issue?
There were multiple problems with your pen:
You have to tell the renderer to globally enable shadow maps like so:
renderer.shadowMap.enabled = true
You have to tell the extruded shape to receive shadows:
mesh.receiveShadow = true;
The extruded shape used MeshBasicMaterial in your pen. This is an unlit material, which means it does not react to lights. The codepen below now uses MeshPhongMaterial. You might also want to consider adding an ambient or hemisphere light so all parts of your mesh are lit; a combined sketch follows below.
Codepen: https://codepen.io/anon/pen/vPPJxW?editors=1010
three.js R102
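Putting those fixes together, a minimal sketch written against a current three.js (the geometry, material color, and light position are illustrative stand-ins, not taken from the original pen):

import * as THREE from 'three';

const scene = new THREE.Scene();

// 1. Globally enable shadow maps on the renderer
const renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true;

// 2. The light must cast shadows (as in the question's snippet)
const pointLight = new THREE.PointLight(0xffffff, 15, 20);
pointLight.position.set(0, 10, 0);
pointLight.castShadow = true;
scene.add(pointLight);

// 3. Use a lit material instead of MeshBasicMaterial, and let the mesh
//    cast and receive shadows (a box stands in for the extruded shape)
const mesh = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshPhongMaterial({ color: 0x88ccff })
);
mesh.castShadow = true;
mesh.receiveShadow = true;
scene.add(mesh);

// Optional: an ambient light keeps the unlit side of the mesh visible
scene.add(new THREE.AmbientLight(0x404040));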

Lens Flare strange rotation while rotating camera

While rotating the camera, the lens flare rotates somewhat based on the camera angle; please see this fiddle: http://jsfiddle.net/e4kf3u7t/1/
I want to avoid that and make the lens flare always face the camera without rotation, like Sprite behavior: http://jsfiddle.net/e4kf3u7t/3/
Sample code:
var flareColor = new THREE.Color( 0xffffff );
var lensFlare = new THREE.LensFlare( textureFlare0, 200, 0.0, THREE.NormalBlending, flareColor );
lensFlare.position.y = 100;
lensFlare.position.z = 200;
scene.add(lensFlare);
The reason I want to use Lensflare instead of Sprite is that, as you can see in the fiddle, the lens flare disappears when its center is hidden behind another geometry; a Sprite doesn't act like that.

Z-buffer problems using 2D planes in THREE.js

I am composing 2D planes with textures. I have 3 levels:
a background plane at z = 0,
black shapes for connections at z = 0.1, and
small planes with textures at z = 0.2.
The problem is that when I move the camera, the planes seem to change z position. Planes are drawn at an incorrect Z that depends on the position of the camera; moving the camera changes it again, and it looks very ugly.
Maybe I need to activate some Z-buffer property for correct drawing?
The WebGL init is like this, and the planes are exactly the same (only the Z coord changes):
renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
renderer._microCache = new MicroCache(); // image cache
renderer.setClearColor(0xeeeeee, 1);
document.body.appendChild(renderer.domElement);

// add directional light source
var directionalLight = new THREE.DirectionalLight(0xffffff, 1);
directionalLight.position.set(1, 1, 1300).normalize();
scene.add(directionalLight);

// background plane
plane = new THREE.Mesh(
    new THREE.PlaneGeometry(200000, 200000, 1, 1),
    new THREE.MeshLambertMaterial({ color: 0xffffff, opacity: planeOpacity, transparent: true })
);
plane.position.z = 0;
scene.add(plane);
The other planes are exactly the same but with a greater Z position.
Help please!
Thanks!
Palomo
What you're seeing is probably z-fighting. Internally, depth is represented as an integer on the GPU, so there is only a fixed number of distinct z values between the camera's near and far planes. The solution is to either move your planes further apart or narrow the camera's near/far range.
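As an example of the second fix, here is a minimal sketch (the near/far values are illustrative, not from the question): tightening the camera's depth range around the content leaves more depth precision to separate planes that are only 0.1 apart:

// Illustrative values: pick the largest near and smallest far that still
// contain the scene, instead of something like 0.1..200000. Depth precision
// is spread over a much smaller range, so close planes resolve cleanly.
const camera = new THREE.PerspectiveCamera(
    45,                                      // fov
    window.innerWidth / window.innerHeight,  // aspect
    100,                                     // near: as large as the scene allows
    5000                                     // far: as small as the scene allows
);
camera.updateProjectionMatrix(); // required after changing near/far at runtime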
