Possible to prioritise drawing of objects in Three.js? - three.js

I am working on a CAD type system using threejs. I have thin objects next to other objects (think thin 2mm metal sheeting fixed to posts on a building measured in metres). When I am zoomed in it all looks fine. The objects do not intersect at all. As I zoom out the objects get smaller and I end up with cases where the post object 'glimmers' (sort of shows through) the metal sheet object as I rotate it around.
I understand it's the small numbers I am working with that is causing this effect. However, is there a way to set a priority such that one object (the metal sheeting) is more important than another object (post) so it doesn't get that sort of effect?

To answer the question from the title: yes, it is possible to prioritize the drawing order with renderOrder.
myMesh.renderOrder = 5
myOtherMesh.renderOrder = 7
You can then apply different depth effects, turn off the depth test, etc.
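For instance, a minimal sketch of what "turning off the test" could look like (the mesh name is the same placeholder as above):
myOtherMesh.material.depthTest = false   // this material ignores the depth buffer entirely
myOtherMesh.material.depthWrite = false  // and does not write to it either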
Another way is to group objects with layers: set the appropriate layer mask on the camera and then render (multiple times).
renderer.autoClear = false            // we manage clearing manually across the two passes
myMesh.layers.set(5)                  // put the prioritised mesh on its own layer
renderer.clear()
camera.layers.set(0)                  // first pass: everything left on the default layer
renderer.render(scene, camera)
renderer.clearDepth()                 // so the second pass always draws on top
camera.layers.set(5)                  // second pass: the prioritised layer
renderer.render(scene, camera)

This is called z-fighting: two fragments are so close in the given depth space that their z-values fall within the depth buffer's margin of error, so their true front-to-back order can get inverted.
The easiest way to resolve this is to reduce the range your depth buffer has to cover. This is controlled by the near and far properties on your camera. You'll need to play with the values to determine what works best for your scenario. If you can minimize the distance between the near and far planes, you'll have better luck avoiding z-fighting.
For example, if (as a loose estimate) the bounding sphere of your entire model has a diameter of 100, then the distance between near and far only needs to be 100. However, their values are specified as distances from the camera, so as you zoom out and your camera moves further away, you should adjust the values to maintain that minimum distance between them. If your camera is at z = 100, then set near = 50 and far = 150. When you pull your camera back to z = 250, then update near = 200 and far = 300.
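As a rough sketch of that idea (the model radius of 50 and the model sitting at the origin are assumptions for illustration, not values from the question):
var modelRadius = 50;                        // half of the ~100 diameter loose estimate above
function updateClippingPlanes() {
    var dist = camera.position.length();     // distance from the camera to the origin
    camera.near = Math.max(0.1, dist - modelRadius);
    camera.far = dist + modelRadius;
    camera.updateProjectionMatrix();         // required after changing near/far
}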
Another option is to use the WebGLRenderer.logarithmicDepthBuffer option. (example)
Edit: There is one other cause: the faces of the shapes are actually co-planar. If two triangles are occupying the same space, then you're all but guaranteeing z-fighting.
The simple solution is to move one of the components such that the faces are no longer co-planar. You could also potentially apply a polygonOffset to the sheet metal material, but your use-case doesn't sound like that is appropriate.

Related

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid. In code the model is called defaultMesh and uses a merged geometry for performance reasons.
I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated by my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good. However, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera moves), and it is therefore horrendously slow (obviously).
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
// cast a ray from the camera position towards each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var direction = gridPoints.geometry.vertices[i].clone().sub(startPoint).normalize();
    var ray = new THREE.Raycaster(startPoint, direction);
    if (ray.intersectObject(defaultMesh).length > 0) {
        // the ray hits the mesh, so this point is occluded - remove it from the visible list
        gridPointsVisible.splice(gridPointsVisible.indexOf(gridPoints.geometry.vertices[i]), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
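A minimal sketch of the read-back half of that idea (pickingScene, which holds flat-coloured copies of the points, and the RGB id packing are assumptions for illustration):
var size = 256;                                       // reduced resolution, as suggested above
var pickingTarget = new THREE.WebGLRenderTarget(size, size);
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera);                // every point drawn with a flat, unique colour
renderer.setRenderTarget(null);
var buffer = new Uint8Array(size * size * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, size, size, buffer);
var visibleIds = {};
for (var i = 0; i < buffer.length; i += 4) {
    // decode the id the same way the picking material encoded it (id packed into RGB here)
    var id = (buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2];
    visibleIds[id] = true;
}
// any point whose id never appears in visibleIds is hidden behind the mesh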

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distance at which things will be rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still want to synchronize the position of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
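To make option 2 and the position sync concrete, here is a rough three.js sketch (starTexture and the sphere radius are placeholders, not values from the question):
var sky = new THREE.Mesh(
    new THREE.SphereGeometry(100, 32, 32),
    new THREE.MeshBasicMaterial({
        map: starTexture,          // ambience baked straight into the texture; MeshBasicMaterial is unlit
        side: THREE.BackSide,      // we look at the sphere from the inside
        depthWrite: false          // never writes depth, so foreground objects always win
    })
);
sky.renderOrder = -1;              // drawn before everything else
scene.add(sky);
function render() {
    sky.position.copy(camera.position);   // follow the camera's position, but not its rotation
    renderer.render(scene, camera);
    requestAnimationFrame(render);
}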
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling function so that the skybox is always considered further away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.

THREE.JS: Render large, distant objects at correct z-indicies and still zoom into small objects

I've got a scene where I'm drawing (to scale) the earth, the moon, and some spacecraft. When the moon is occluded by the earth, instead of disappearing, it is still visible (through the earth).
From my research I found that part of the problem is that the near setting for my camera was much too small; as detailed in the linked article, small values of near cause the z-sorting of very distant objects to lose precision.
The complexity here is that I need fine-grained z-indices when the camera is zoomed in to look at a spacecraft (an object with a radius of 61 meters at most, in comparison to the earth, weighing in at r =~ 6.5e+06 meters). In order to make objects on the scale of the moon and earth render in the correct order, the near plane has to be at least 100,000 m, at which point I cannot look at close objects.
One solution would be to reduce the scale to use kilometers, but I cannot afford to lose that precision, and prefer to use meters.
Any ideas as to how to make very large, distant objects render at the correct z Indices, while retaining scale and ability to zoom into small objects?
My Ideas (which I don't know how to implement):
Change z-buffer to include more values, and higher resolution?
Add distant objects to a "farScene" which is rendered using a "farCamera" which is controlled by the same controls used on a close-up camera?
As per @WestLangley's answer, the solution is simply to add the option logarithmicDepthBuffer: true to the renderer:
this.renderer = new THREE.WebGLRenderer({antialias: true, logarithmicDepthBuffer: true});
It is also possible that the problem is the z-test and not z-precision. That would mean either the z-test is not being applied (perhaps because you render a transparent object with alpha blending), or the z-test is applied with a non-default comparison (e.g. keeping the farther fragment instead of the nearer one).
Try rendering the whole scene with a simple shader and no transparency, in order to make sure that transparency is not the source of the bug.
To get the z-order right without a z-test, you have to sort the objects yourself each frame to determine the order of rendering (from far to close).
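A hedged sketch of that manual far-to-near ordering (the mesh list is a placeholder; it assumes those materials have the depth test disabled, as described above):
var sorted = [earthMesh, moonMesh, craftMesh];        // assumed meshes to be ordered by distance
function updateRenderOrder() {
    sorted.sort(function (a, b) {
        return b.position.distanceTo(camera.position) - a.position.distanceTo(camera.position);
    });
    sorted.forEach(function (mesh, i) {
        mesh.renderOrder = i;                         // farthest gets the lowest renderOrder, so it draws first
    });
}
// call updateRenderOrder() each frame before renderer.render(scene, camera)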

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended. I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50) the contacting layers are flickering and the render depth doesn't work anymore.
How to fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to display and positive factors to hide.
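For reference, a sketch of the full material set-up (the colour and the polygonOffsetUnits value are placeholders chosen for illustration):
var frontMaterial = new THREE.MeshLambertMaterial({
    color: 0x2194ce,
    polygonOffset: true,
    polygonOffsetFactor: -0.1,    // negative: pull this surface towards the camera in depth
    polygonOffsetUnits: -1
});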
Try, for starters, reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If the far range reduction helps, then you're experiencing a lack of precision (depending on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example if you have a camera at 0,0,0 looking at an object at 0,0,10, if you want another object to be behind the first object put it at 0,0,11 it should work.
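In code, that example would look roughly like this (the box names are placeholders):
camera.position.set(0, 0, 0);
camera.lookAt(0, 0, 10);           // look towards the objects
frontBox.position.set(0, 0, 10);   // the nearer object
backBox.position.set(0, 0, 11);    // slightly farther along the view direction, so it sits behind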
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does if it's not shadertoy, or some video filter or something. Every time you move your camera the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here, i'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here is how this would work: say you want to draw a decal on a surface. You can 'draw' another mesh on this surface, for example by projecting a quad onto it. You want to draw a bullet hole over a concrete wall and end up with two coplanar surfaces - the wall and the bullet hole. You can figure out the depth buffer precision, find the smallest representable increment, and then move the bullet hole mesh by that value towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which planes are moved back and forth in the smallest possible increment), but it does translate in the depth direction, ending up in front of the wall.
I don't see any flicker. The cube movement in 3D seems to be super-smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.

Vertex buffer objects and glutsolidsphere

I have to draw a great collection of spheres in a 3D physical simulation of a "spring-mass" like system.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Is there any method to draw OpenGL spheres in a way faster than glutSolidSphere?
Spheres are self-similar; every sphere is just a scaled version of any other sphere. I see no need to regenerate any geometry. Indeed, I see no need to have more than one sphere at all.
It's simply a matter of providing the proper scaling matrix. I would suggest a sphere of radius one centered at the origin for your display list or buffer object mesh. Then you can just transform it to different locations, using a scale to set the new radius.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
Why are you generating a display list at all if the geometry you put into it is dynamic? Display lists are meant for static geometry that never or only seldom changes.
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Actually VBOs are most efficient with static geometry as well. In general you want to keep the number of actual geometry updates as low as possible. In your case the only things updating are the positions (and maybe the sizes) of the spheres. This is a prime example for instanced drawing. However it also works well to update only a uniform or the transformation matrix and then issue the draw call for a single sphere.
The idea of Vertex Arrays and VBOs is, that you draw a whole batch of geometry with a single call. A sphere would be such a batch.
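In three.js terms (to stay consistent with the rest of this page, rather than the OP's raw OpenGL), the "one unit sphere, many transforms" idea becomes a single instanced draw call. A rough sketch, where bodyCount, positions and radii are placeholders coming from the simulation:
var sphereGeo = new THREE.SphereGeometry(1, 16, 16);          // one unit sphere, reused for every body
var spheres = new THREE.InstancedMesh(sphereGeo, new THREE.MeshNormalMaterial(), bodyCount);
var matrix = new THREE.Matrix4();
for (var i = 0; i < bodyCount; i++) {
    // positions[i] is an assumed THREE.Vector3, radii[i] an assumed per-body radius
    matrix.compose(positions[i], new THREE.Quaternion(), new THREE.Vector3(radii[i], radii[i], radii[i]));
    spheres.setMatrixAt(i, matrix);
}
spheres.instanceMatrix.needsUpdate = true;                    // re-upload after each simulation step
scene.add(spheres);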
