Lights on BufferGeometry not correct - three.js

I have a BufferGeometry that uses MeshLambertMaterial and VertexColors. When I apply lights, as the gif below shows, the lighting gets distorted when the BufferGeometry consists of differently sized faces with the same color. If I use different colors for each face, with smaller faces (1x1), the lighting looks good. I've tried calculating face normals, but that doesn't solve the issue.
Anything I missed?
Here is a gif showing the issue

You are using per-vertex lighting; you probably want per-pixel lighting instead.
https://en.wikipedia.org/wiki/Per-pixel_lighting
http://www.learnopengles.com/tag/per-vertex-lighting/
It is my understanding that three.js almost exclusively focuses on PBR lighting/shaders. With that mindset, I'm not even sure what Lambert should be used for. Either way, Lambert only supports per-vertex lighting, not per-pixel, so you will always get these artifacts from interpolating across different topologies. There are no limitations that prevent this from working differently; it's just by design.
MeshPhongMaterial, on the other hand, does per-pixel lighting, but because of all the physical correctness you might have a hard time removing the specular term, leaving only the Lambert term.
If you opt for this, you might find yourself having to do something like the following (a sketch; a 1x1 black DataTexture stands in for obtainTextureThatIsBlack()):
var blackPixel = new Uint8Array([0, 0, 0, 255]); // one opaque black pixel
var myBlackTexture = new THREE.DataTexture(blackPixel, 1, 1, THREE.RGBAFormat);
myBlackTexture.needsUpdate = true;
var myMaterial = new THREE.MeshPhongMaterial({ specularMap: myBlackTexture /* ...other options... */ });
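Alternatively (my own suggestion, not from the issue linked below), zeroing the material's specular color may achieve the same without a texture:
myMaterial.specular.set(0x000000);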
https://github.com/mrdoob/three.js/issues/10808
Summary:
Anything I missed?
You missed the arbitrary three.js design caveats :) It will remain a mystery why this material exists as it is, and why it doesn't just have a flag to flip between vertex and fragment lighting.

Related

Blurring light reflections in MeshStandardMaterial

In Three.js, I have a standard-material mesh cube sitting between two RectAreaLight strips. By default, it renders like this:
I wanted to add shadows under the cube, so I set the renderer's shadowMap:
renderer.shadowMap.enabled = true;
When I do this, suddenly my cube is reflecting the light strips:
(to clarify, only the two horizontal strips are RectAreaLight instances. The 3 vertical strips shown are just planes.)
This effect is kind of neat, but it's too sharp: it looks pixellated because my cube is relatively low-poly, and each facet has a different angle.
My understanding is that MeshStandardMaterial's 'roughness' property can be used to diffuse and soften reflections, but neither roughness nor metalness makes any difference here; no matter what values I set, these harsh reflections remain.
I'd love to find a way to soften the reflections. I've also noticed that lines in general tend to look pretty aliased, even though I've set renderer.antialias to true. Perhaps a better anti-aliasing strategy would kill two birds with one stone?
The docs mention no shadow support with RectAreaLights:
https://threejs.org/docs/#api/en/lights/RectAreaLight
I wouldn't be surprised if there were other issues with the materials as well... RectAreaLights are implemented in a different branch of the render pipeline, as indicated by the requirement of including custom uniforms, but I don't know enough about the details to give you a better answer.
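For reference, the custom uniforms mentioned above come from RectAreaLightUniformsLib, which has to be initialized once before rendering (a minimal sketch, assuming the examples/jsm build of three.js):
import { RectAreaLightUniformsLib } from 'three/examples/jsm/lights/RectAreaLightUniformsLib.js';
RectAreaLightUniformsLib.init();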
I would love to see other responses to this though, because RectAreaLights can create some really cool looks...

Transparency with complex shapes in three.js

I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split up in multiple meshes similar to this one:
The main issue is that I also need to use transparent PNG textures (in order to achieve the complex shape while keeping polygon counts low) like this:
As you can see from the live demo, this gives really weird results, especially when rotating the camera around the lamp - I believe due to z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that, in theory, what I need to do is explicitly define a renderOrder each frame for each mesh with a transparent texture (because the distance-to-camera ordering changes when moving around), so that Three.js draws the meshes in back-to-front order.
However, even ignoring for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how to set this order theoretically.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle, some parts of mesh A can be closer to the camera than some parts of mesh B, while somewhere else, parts of mesh B are closer.
In this situation, it seems impossible to define a closer mesh, and thus a proper renderOrder.
Have I understood correctly, and this is basically reaching the limits of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render scenes (one for opaque meshes first, then one for transparent ones ordered back to front) the right one? How should I go about defining the back to front renderOrder the way that Three.js expects?
Thanks a lot for your help!
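A minimal sketch of the per-frame renderOrder idea described above (transparentMeshes is a hypothetical array of your transparent meshes that you maintain yourself; three.js sorts from lowest to highest renderOrder, so the farthest mesh gets the lowest value):
var worldPos = new THREE.Vector3();
function updateRenderOrder(camera) {
  transparentMeshes.forEach(function (mesh) {
    mesh.getWorldPosition(worldPos);
    mesh.renderOrder = -worldPos.distanceTo(camera.position); // farther away = lower = drawn first
  });
}
Note that this orders whole meshes, not pixels, so intertwined meshes can still sort incorrectly, exactly as the question suspects.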

How can I solve z-fighting using Three.js

I'm learning three.js and I've run into a z-fighting problem.
There are two plane objects, one blue and the other pink.
I set their positions using the following code:
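// note: the planes are only 0.0001 apart in z, likely below depth-buffer precision at typical near/far ranges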
plane1.position.set(0,0,0.0001);
plane2.position.set(0,0,0);
Is there any solution in three.js that fixes all the z-fighting problems in a big scene?
I ask because I'm working on rendering a BIM (Building Information Model, in .ifc format) on the web.
The model itself has a lot of faces that are very close to each other, which causes a lot of z-fighting problems, as you can see:
Does three.js provide this kind of method to solve the problem, so that I can handle the z-fighting with just a couple of lines of code?
Three.js provides a general solution like this:
var renderer = new THREE.WebGLRenderer({ logarithmicDepthBuffer: true });
The demo is provided also here:
https://threejs.org/examples/webgl_camera_logarithmicdepthbuffer.html
It changes the precision of the depth buffer, which generally resolves the z-fighting problem at a distance.
At least for the planes in your screenshot, you can solve the problem without switching to the logarithmicDepthBuffer. Try setting depthWrite on the planes' materials to false. Sometimes you also have to override renderOrder for the meshes (see the sketch after the docs excerpts below).
From the docs:
.depthWrite: Whether rendering this material has any effect on the depth buffer. Default is true.
When drawing 2D overlays it can be useful to disable depth writing in order to layer several things together without creating z-index artifacts.
.renderOrder: This value allows the default rendering order of scene graph objects to be overridden, although opaque and transparent objects remain sorted independently. When this property is set for an instance of Group, all descendant objects will be sorted and rendered together. Sorting is from lowest to highest renderOrder. Default value is 0.
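A minimal sketch of that combination for the two planes above:
plane1.material.depthWrite = false;
plane2.material.depthWrite = false;
plane2.renderOrder = 0; // drawn first
plane1.renderOrder = 1; // drawn second, appears on top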
What are your PerspectiveCamera's zNear and zFar set to? Try a smaller range: if you currently have 0.1 and 100000, use 1 and 1000 or something. See this answer:
https://stackoverflow.com/a/21106656/128511
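For example (a sketch with placeholder values, per the range suggested above):
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000); // near 1, far 1000 instead of 0.1, 100000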
Or consider using a different type of depth buffer
I just stumbled across z-fighting using multiple curved planes with front- and back-side textures placed along the z-axis of the scene. Even though depthWrite removed the artifacts, I kind of lost the correct visual placement of my objects in space. Flat shading did the trick for me. With enough segments, the light bouncing is perfectly fine and the z-fighting is gone.
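For reference, flat shading is a material flag in current three.js; a minimal sketch:
material.flatShading = true;
material.needsUpdate = true; // required when toggling after the material has already been rendered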

Fixed texture size in Three.js

I am building quite a complex 3D environment in Three.js (FPS-a-like). For this purpose I wanted to structure the loading of textures and materials in an object-oriented way. For example, materials.wood.brownplank is a reusable material with a certain texture and other properties. Below is a simplified visualisation of the process, where models use materials and materials use textures.
loadTextures();
loadMaterials();
loadModels();
//start doing stuff in the scene
I want to use that material on differently sized objects. However, in Three.js you can't (AFAIK) set a fixed texture scale. You have to set the repeat to scale the texture appropriately for your object. But I don't want to do that for every plane of every object I use.
Here is how it looks now
As you can see, the textures are not uniform in size.
Is there an easy way to achieve this? Cloning the texture and/or material every time and setting the repeat according to the geometry won't do :)
I hope someone can help me.
Conclusion:
There is no real easy way to do this. I ended up changing my loading methods, so that things like materials.wood.brownplank are now, for example, getMaterial('wood', 'brownplank'). Inside that function, new objects are instantiated.
You should be able to do this by modifying your geometry UV coordinates according to the "real" dimensions of each face.
In Three.js, UV coordinates are relative to the face and texture (as in, 0.0 = one edge, 1.0 = other edge), no matter what the actual size of texture or face is. But by modifying the UVs in geometry (multiply them by some factor based on face physical size), you can use the same material and texture in different sizes (and orientations) per face.
You just need to figure out the mapping between UVs, geometry scale and your desired working units (e.g. mm or m). Sorry, I don't have or know a ready-made algorithm for it, but that's the approach you probably need to take. Should be quite doable with a bit of experimentation and google-fu; a rough sketch follows.
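A minimal sketch of the idea for a single rectangular plane (fitUVsToWorldSize and tileSize are hypothetical names; assumes the texture uses THREE.RepeatWrapping on both axes):
// Scale a PlaneGeometry's UVs so one texture tile covers tileSize world units
function fitUVsToWorldSize(geometry, width, height, tileSize) {
  var uv = geometry.attributes.uv;
  for (var i = 0; i < uv.count; i++) {
    uv.setXY(i, uv.getX(i) * width / tileSize, uv.getY(i) * height / tileSize);
  }
  uv.needsUpdate = true;
}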

Shadow Mapping - artifacts on thin wall orthogonal to light

I'm having an issue with faces that are back-facing to the light and shadow mapping, and I can't seem to get past it. I'm still at the relatively early stages of optimizing my engine; however, even with everything hand-tuned for this one piece of geometry, it still looks like garbage.
The geometry is a skinny wall that is "curved" via about 5 different chunks of wall. When I create my depth map I'm culling front faces (relative to the light). This definitely helps, but the front faces on the other side of the wall seem to be what's causing the z-fighting/projective shadowing.
Some notes on the screenshot:
Front faces are culled when the depth texture (from the light) is being drawn
I have the near and far planes tuned just for this chunk of geometry (set at 20 and 25 respectively)
One directional light source, coming down on a slight angle toward the right side of the scene, enough to indicate that wall should be shadowed, but mostly straight down
Using a ludicrously large 4096x4096 shadow map texture
All lighting is disabled, but note that I am doing soft lighting (and hence vertex normals for the vertices) even on this wall
As mentioned here, the conclusion is that you should not shadow polygons that face away from the light. I'm struggling with this particular issue because I don't want to pass the face normals all the way through to the fragment shader to rule out the true back faces to the light there; however, if anyone feels this is the best/only solution for this geometry, that's what I'll have to do. Considering how the pipeline doesn't make it easy/obvious to pass the face normals through, it doesn't feel like the path of least resistance. Note that the normals I am passing are the vertex normals, to allow for softer lighting effects around the edges (which will likely include both non-shadowed and shadowed surfaces).
Note that I am having some nasty perspective aliasing. My next step is to work on cascaded shadow maps, but without fixing this I feel like I'm just delaying the inevitable, as I've hand-tightened the view as best I can (or so I think).
Anyways I feel like I'm missing something, so if you have any thoughts or help at all would be most appreciated!
EDIT
To be clear, the wall technically should NOT be in shadow, based on where the light is coming from.
Below is an image with shadowing turned off. This is just using the vertex normals to calculate diffuse lighting; it's not pretty (too much geometry is visible) but it does show that some of the edges are somewhat visible.
So yes, the wall SHOULD be in shadow, but I'm hoping I can get the smoothing working better so the edges can have some diffuse lighting. If it needs to be completely in shadow, then whether it's the shadow map that puts it in shadow, or my code specifically putting it in shadow because the face normal points away, I'm fine with that; but passing the face normal through to my vertex/fragment shaders does not seem like the path of least resistance.
Perhaps these will help illustrate my problem better, or perhaps bring to light some fundamental understanding I am missing.
EDIT #2
I've included the depth texture below. You can see the wall in question in the bottom left, and from the screenshot you can see how I've trimmed the depth values to roughly 0.4 to 1. This means the depth values of that wall start in the 0.4 range, so it's not PERFECTLY clipped, but it's close. Does that seem reasonable? I'm pretty sure it's a full 24- or 32-bit depth buffer, a la the DEPTH_COMPONENT extension on iOS. For #starmole: does this help determine whether it's a scaling error in my projection? Do you think the size/area covered by my map is too large, and that focusing it closer might help?
The problem seems to be that you are
Culling the front faces
Looking at the back face
Not removing the light from the back face because it's actually not lit by the normal - or there is some inaccuracy in the computation
Probably not adding some epsilon
(1) and (2) mean that there will be Z-fighting between the shadow map and the back faces.
Also, the shadow map resolution is not going to help you - just look at the wall in the shadow map, it's one pixel thick.
Recommendations:
Epsilons. Make sure that Z > lightZ + epsilon.
Epsilons. Make sure that the wall is facing the light (dot product of the normal and the light direction > epsilon), so the wall is counted as shadowed when it's very nearly orthogonal to the light (see the sketch below).
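For comparison, three.js exposes the first epsilon directly as a shadow bias on the light (a minimal sketch; the value is scene-dependent):
var light = new THREE.DirectionalLight(0xffffff, 1);
light.castShadow = true;
light.shadow.bias = -0.0005; // small depth offset so depth comparisons don't self-shadow (shadow acne)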

Resources