I'm learning three.js and I ran into a z-fighting problem.
There are two plane objects; one is blue and the other is pink.
I set their positions with the following code:
plane1.position.set(0,0,0.0001);
plane2.position.set(0,0,0);
Is there any solution in three.js that fixes all the z-fighting problems in a big scene?
I ask because I'm working on rendering a BIM (Building Information Model, in .ifc format) on the web.
The model itself has many faces that are very close to each other, which causes a lot of z-fighting problems, as you can see:
Does three.js provide some method to solve this, so that I can handle the z-fighting with just a couple of lines of code?
Three.js provides a general solution for this:
var renderer = new THREE.WebGLRenderer({ logarithmicDepthBuffer: true });
A demo is also provided here:
https://threejs.org/examples/webgl_camera_logarithmicdepthbuffer.html
It changes the precision of the depth buffer, which generally resolves z-fighting problems at a distance.
At least for the planes in your screenshot, you can solve the problem without switching to the logarithmicDepthBuffer. Try setting depthWrite to false on the planes' materials. Sometimes you also have to override renderOrder on the meshes.
From the three.js documentation:
.depthWrite: Whether rendering this material has any effect on the depth buffer. Default is true.
When drawing 2D overlays it can be useful to disable depth writing in order to layer several things together without creating z-index artifacts.
.renderOrder: This value allows the default rendering order of scene graph objects to be overridden, although opaque and transparent objects remain sorted independently. When this property is set on an instance of Group, all descendant objects will be sorted and rendered together. Sorting is from lowest to highest renderOrder. Default value is 0.
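A minimal sketch of that advice, assuming the two planes from the question above (with plane1 meant to end up on top):
// Disable depth writes so the coplanar planes stop fighting over the depth buffer.
plane1.material.depthWrite = false;
plane2.material.depthWrite = false;
// Force an explicit draw order: lower renderOrder is drawn first,
// so plane1 (renderOrder 1) is painted over plane2 (renderOrder 0).
plane2.renderOrder = 0;
plane1.renderOrder = 1;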
What are your PerspectiveCamera's zNear and zFar set to? Try a smaller range: if you currently have 0.1, 100000, use 1, 1000 or something similar. See this answer:
https://stackoverflow.com/a/21106656/128511
Or consider using a different type of depth buffer.
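For example, a sketch of tightening that range (the values here are placeholders, not a recommendation):
// A tighter near/far range spreads depth-buffer precision over less distance.
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
// If you change near/far on an existing camera, re-run:
camera.updateProjectionMatrix();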
I just stumbled across z-fighting using multiple curved planes, with textures on their front and back sides, placed along the z-axis of the scene. Even though disabling depthWrite removed the artifacts, I kind of lost the correct visual placement of my objects in space. Flat shading did the trick for me: with enough segments, the lighting looks perfectly fine and the z-fighting is gone.
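In case it helps, a sketch of that material setup (the material type and options here are assumptions):
// Flat shading skips smooth-normal interpolation; with enough segments the surface still reads as curved.
var material = new THREE.MeshStandardMaterial({ flatShading: true, side: THREE.DoubleSide });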
Related
Transparency artefact problem
Hello,
I have an issue with three.js. I import a "big" glb model into my scene which is not transparent, but wherever the model overlaps itself in the camera view, the foreground becomes transparent. (As you can see in the picture, the background mountain appears in front.)
I tried several solutions:
Setting depthTest to false on the glb material
Setting sortOrder to false
Using logarithmicDepthBuffer
Setting the material's transparent to false
Changing alphaTest from 0 to 1 in 0.1 steps
But nothing worked. If someone has the solution :)
Thank you!
Rendering transparent objects can't be done entirely correctly in the general case. You first need to render all non-transparent objects, and then render transparent surfaces from back to front, so that each new surface blends on top of whatever is behind it. There are a number of cases where this cannot be done, especially when transparent objects overlap themselves.
Fixing this would involve cutting the problematic objects (even single triangles) into smaller pieces so that the ordering can be preserved, and that is often nearly impossible. Since you are working with three.js, see if you can alter your design so that this isn't a problem, or so that the artifacts of incorrect rendering order aren't too noticeable.
Thanks to donmccurdy, who found the solution on the three.js forum.
It turns out my glb file was transparent after all :( So there are two solutions.
Solution 1:
Find out why the model is transparent and fix it.
Solution 2:
Change it back to opaque and restore the default depthWrite value:
// Look up the mesh by its name in the glb, then force it back to opaque.
mesh = content.getObjectByName('mesh_0');
mesh.material.transparent = false;
mesh.material.depthWrite = true;
I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split up into multiple meshes, similar to this one:
The main issue is that I also need to use transparent PNG textures (in order to achieve the complex shape while keeping polygon counts low) like this:
As you can see from the live demo, this gives really weird results, especially when rotating the camera around the lamp. I believe this is due to the z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that, in theory, what I need to do is explicitly define a renderOrder for each mesh with a transparent texture on every frame (because the distance-to-camera ordering changes as the camera moves around), so that three.js knows which objects are currently closest to the camera.
However, even setting aside for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how to define that order in theory.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle, some parts of mesh A can be closer to the camera than some parts of mesh B, while elsewhere parts of mesh B are closer.
In this situation, it seems impossible to define which mesh is closer, and thus a proper renderOrder.
Have I understood this correctly, and is this basically reaching the limits of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render scenes (one for opaque meshes first, then one for transparent ones ordered back to front) the right one? How should I go about defining the back to front renderOrder the way that Three.js expects?
Thanks a lot for your help!
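For reference, a rough sketch of the per-frame ordering idea described above (transparentMeshes is a placeholder for the lamp's transparent meshes; this is untested):
var worldPos = new THREE.Vector3();
function updateRenderOrder(camera) {
  // Farther meshes get a lower renderOrder, so three.js draws them first (back to front).
  transparentMeshes.forEach(function (mesh) {
    mesh.getWorldPosition(worldPos);
    mesh.renderOrder = -worldPos.distanceTo(camera.position);
  });
}
// Call updateRenderOrder(camera) before each renderer.render(scene, camera).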
I have a BufferGeometry that uses MeshLambertMaterial and VertexColors. When I apply lights, as the gif below shows, the lighting gets distorted when the BufferGeometry consists of faces of different sizes with the same color. If I use different colors for each face, with smaller faces (1x1), the lighting looks good. I've tried calculating face normals, but that doesn't solve the issue.
Anything I missed?
Here is a gif showing the issue
You are using per-vertex lighting; you probably want per-pixel lighting instead.
https://en.wikipedia.org/wiki/Per-pixel_lighting
http://www.learnopengles.com/tag/per-vertex-lighting/
It is my understanding that three.js almost exclusively focuses on PBR lighting/shaders. With that mindset, I'm not even sure what Lambert should be used for. Either way, Lambert only supports per-vertex lighting, not per-pixel, so you will always get these artifacts from interpolating across different topologies. There are no limitations that prevent this from working differently; it's just by design.
MeshPhongMaterial, on the other hand, does per-pixel lighting, but because of all the physical correctness you might have a hard time removing the specular term, leaving only the Lambert term.
If you opt for this, you might find yourself having to do something like this:
// Placeholder helper: any solid-black texture will zero out the specular contribution.
var myBlackTexture = obtainTextureThatIsBlack();
var myMaterial = new THREE.MeshPhongMaterial({ /* ...other options... */ specularMap: myBlackTexture });
https://github.com/mrdoob/three.js/issues/10808
Summary:
Anything I missed?
You missed the arbitrary three.js design caveats :) It will remain a mystery why this material exists as is, and why it doesn't just have a flag to flip between per-vertex and per-fragment lighting.
The following illustrates the rendering order I would like to obtain for two plane geometries:
http://jsfiddle.net/Axy2F/8/
This works fine under r58 but under r61 the red square is obscured regardless of how I structure the scene graph. I'm unclear whether this is a bug in r61, or whether I was doing things incorrectly in r58, in a way that just happened to work.
Am I right in assuming that behind.add(child) should suffice to have the red square "beneath" the indigo one in the scene graph, and therefore rendered on top of it?
If not, what is the correct way to establish the rendering order by controlling the construction of the scene graph (that works with r61)? I would like to avoid setting renderDepth explicitly. Note that setting renderer.sortObjects to false does not help.
The object that is in front is the object that is closest to the camera. Being a child has nothing to do with it.
Both your objects have position ( 0, 0, 0 ), so they are the same distance from the camera.
This will lead to z-fighting, which is worse with CanvasRenderer than it is with WebGLRenderer.
Change the position of the child to render it in front. For example,
child.position.z = 1;
FYI, r.61 has a different tie-breaker rule than r.58 did. This is why the rendering is different in r.61.
In a three.js project (viewable here) I have 500 cubes, all of the same size and all statically positioned. On each of these cubes, five of the faces always remain the same color; however, the color of the sixth face can be dynamically updated, and this modification occurs across many of the cubes in a single frame, on most frames.
I've been able to implement this scene several different ways, but I have not been completely satisfied with the performance of anything I've tried. I know I must not have hit upon the right technique yet or maybe I'm not implementing one quite right. From a performance standpoint, what is the best way to change the color of these cube faces while maintaining independence across each of the cubes?
Here is what I have tried so far:
Create 500 individual CubeGeometry and Mesh instances. Change the color of a geometry face as described in the answer here: Change the colors of a cube's faces. So far this method has performed the best for me, but 500 identical geometries seems less than ideal, especially because I'm not able to achieve a steady 60fps even with a good GPU. Rendering takes about 11-20ms here.
Create one CubeGeometry and use it across 500 Mesh instances. Create an array of MeshBasicMaterials to form a MeshFaceMaterial for each Mesh. Five of the MeshBasicMaterial instances are shared, representing the five statically colored sides of each cube. Create a unique MeshBasicMaterial to add to the MeshFaceMaterial for each Mesh. Update the color of this unique material with thisMesh.material.materials[3].uniforms.diffuse.value.copy(newColor). This method renders much slower than the first, 90-110ms, which surprised me. Maybe it's because 500 cubes with 6 materials each = 3000 materials to process?
Any advice you can offer would be much appreciated!
I discovered that three.js performs a WebGL draw call for each mesh in your scene, and this is what was really hurting my performance. I looked into yaku's suggestion of using BufferGeometry, which I'm sure would be a great route, but using BufferGeometry appears to be relatively difficult unless you have a good amount of experience with WebGL/OpenGL.
However, I came across an alternative solution that was incredibly effective. I still created individual meshes for each of my 500 cubes, but then I used GeometryUtils.merge() to merge each of those meshes into a single geometry representing the entire group of cubes. I then used that group geometry to create a group mesh. An explanation of GeometryUtils.merge() is here.
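For context, the merge step might look roughly like this with the legacy Geometry API of that era (cubeMeshes and the material options are assumptions):
// Merge all 500 cube meshes into one geometry so the scene needs a single draw call.
var groupGeometry = new THREE.Geometry();
for (var i = 0; i < cubeMeshes.length; i++) {
  cubeMeshes[i].updateMatrix(); // bake each cube's transform into its matrix before merging
  THREE.GeometryUtils.merge(groupGeometry, cubeMeshes[i]);
}
// Per-face colors require a material with face vertex colors enabled (legacy API).
var _mergedCubesMesh = new THREE.Mesh(
  groupGeometry,
  new THREE.MeshBasicMaterial({ vertexColors: THREE.FaceColors })
);
scene.add(_mergedCubesMesh);
// After changing any face color, flag the geometry for re-upload:
// _mergedCubesMesh.geometry.colorsNeedUpdate = true;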
What's especially nice about this tactic is that you still have access to all the faces that were part of the underlying geometries/meshes you merged. In my project, this allowed me to keep full control over the face colors I cared about:
// For 500 merged cubes, there will be 3000 faces in the geometry.
// This code will get the fourth face (index 3) of any cube.
_mergedCubesMesh.geometry.faces[(cubeIdx * 6) + 3].color