I am working on a project that needs to render thousands of unique building-model meshes. My objects need to be selectable, and each will have its own (non-geometry) data attached to it.
Since the number of objects/meshes has a big impact on rendering performance, I merge my objects into a few huge meshes and create "pseudo meshes" that just hold the indices of the mesh's triangles plus the other attached data. I am able to render each mesh with its own color, and the user is able to pick them separately.
However, transparency and render order are making things hard for me. I managed to render my scene with transparent and non-transparent objects by merging them into separate meshes.
My problem is that I cannot highlight the picked objects effectively. I want the highlight to be a non-transparent color. If a transparent object is picked, I change the associated triangles' color attribute to match my highlight color, with an opacity of 1.0. But this does not work as I intend: since the merged mesh's material is transparent, the render order for it won't change even though I set the opacity of its triangles to 1.0. The result is poorly ordered objects on the screen.
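A minimal sketch of the color update just described, assuming the merged mesh uses a THREE.BufferGeometry with a per-vertex RGBA color attribute and each pseudo mesh records its vertex range (the names below are illustrative):

// Paint the picked pseudo mesh's vertex range with an opaque highlight color.
// pseudoMesh.vertexStart / pseudoMesh.vertexCount are assumed bookkeeping fields.
var colors = mergedMesh.geometry.getAttribute('color'); // itemSize 4 (RGBA)
for (var i = pseudoMesh.vertexStart; i < pseudoMesh.vertexStart + pseudoMesh.vertexCount; i++) {
    colors.setXYZW(i, 1.0, 0.5, 0.0, 1.0); // highlight color with alpha = 1.0
}
colors.needsUpdate = true;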
I know I could use MultiMaterial, but then the number of draw calls may or may not increase enormously if the user selects a large number of objects. So I am looking for a better solution that keeps the same performance whatever happens.
Do you have any suggestions on how to solve this problem?
Related
I know that with QObjectPicker I can mouse pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. you don't need to select occluded ones) you could add a second frame graph branch to your existing one and draw each object with a unique color, but to an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, and select the corresponding objects (compare to this question/answer).
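The color-ID picking idea itself is API-agnostic; here is a minimal sketch in three.js-flavored JavaScript (using the recent three.js render-target readback API; the id-to-color packing is an assumption):

// Render a picking scene in which each object's flat material color encodes its id.
var pickingTarget = new THREE.WebGLRenderTarget(width, height);
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera); // same camera as the visible scene
renderer.setRenderTarget(null);

// Read back only the pixels under the selection rectangle.
// Note: the readback origin is the bottom-left corner of the target.
var pixels = new Uint8Array(rectWidth * rectHeight * 4);
renderer.readRenderTargetPixels(pickingTarget, rectX, rectY, rectWidth, rectHeight, pixels);

var selectedIds = new Set();
for (var i = 0; i < pixels.length; i += 4) {
    // Unpack the id from the RGB channels.
    selectedIds.add((pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2]);
}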
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. It didn't seem to matter where I added it in the frame graph, i.e. it always captured the last state, so even if you have multiple render targets it might capture the wrong one, etc. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture check out my example on GitHub.
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects, maybe you could implement the idea from above for each single object, i.e. for each object you have an offscreen frame graph branch that filters out all other objects. Then you could check each rendered texture for the rectangle drawn with the mouse. But again, I'm not sure how well this works with Qt3D, and if you have many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the view frustum of the camera. Here, you would instead compute a frustum from the rectangle coordinates drawn with the mouse (see the sketch below). Check out the QFrustumCulling code; you would need to compute the planes differently of course, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects against it. Unfortunately, this also selects objects whose bounding sphere merely intersects the frustum, even though you might not visibly touch any part of the object. If that bothers you, you could directly select all objects whose sphere is completely within the frustum, and for all objects that only partly intersect, do the intersection computation on a per-triangle basis, exiting the computation for the current object as soon as a triangle intersects the frustum. Depending on the number of triangles, this could be computationally very costly.
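The frustum construction amounts to "cropping" the camera's projection onto the rectangle. A minimal sketch of the math in three.js-flavored JavaScript (recent API), assuming the rectangle corners [x0,x1] x [y0,y1] are already in normalized device coordinates; the idea transfers directly to Qt3D:

// The crop matrix remaps the NDC rectangle to the full [-1,1] clip range.
var sx = 2 / (x1 - x0);
var sy = 2 / (y1 - y0);
var crop = new THREE.Matrix4().set(
    sx, 0,  0, -(x1 + x0) / (x1 - x0),
    0,  sy, 0, -(y1 + y0) / (y1 - y0),
    0,  0,  1, 0,
    0,  0,  0, 1
);
// Extract the planes of crop * projection * view.
var m = crop.clone().multiply(camera.projectionMatrix).multiply(camera.matrixWorldInverse);
var frustum = new THREE.Frustum().setFromProjectionMatrix(m);
// intersectsObject() tests the bounding sphere, matching the caveat above.
var hits = objects.filter(function (o) { return frustum.intersectsObject(o); });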
I'd definitely stick to being able to select only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with colors these days anyway.
I've got a scene where I'm drawing (to scale) the earth, moon, and some spacecraft. When the moon is occluded by the earth, instead of disappearing, it is still visible (through the earth).
From my research I found that part of the problem was that the near setting for my camera was much too small; as detailed in the linked article, small values of near cause rounding in z-sorting to get muddled for very distant objects.
The complexity here is that I need fine-grained z-resolution when the camera is zoomed in to look at a spacecraft (an object with a radius of at most 61 meters, compared to the earth, weighing in at r ≈ 6.5e+06 meters). To make objects on the scale of the moon and earth render in the correct order, the near plane has to be at least 100,000 m, at which point I cannot look at close objects.
One solution would be to reduce the scale to use kilometers, but I cannot afford to lose that precision, and prefer to use meters.
Any ideas as to how to make very large, distant objects render at the correct z-indices, while retaining scale and the ability to zoom into small objects?
My Ideas (which I don't know how to implement):
Change the z-buffer to include more values, at higher resolution?
Add distant objects to a "farScene" which is rendered using a "farCamera" which is controlled by the same controls used on a close-up camera?
As per @WestLangley's answer, the solution is simply to add the option logarithmicDepthBuffer: true to the renderer:
this.renderer = new THREE.WebGLRenderer({antialias: true, logarithmicDepthBuffer: true});
Probably the problem is the z-test, not z-precision. That means either the z-test is not applied (perhaps because you render transparent objects with alpha blending), or it is applied with a non-default comparison function (e.g. passing the farther fragment instead of the nearer one).
Try rendering the whole scene with a simple shader and no transparency, in order to make sure that transparency is not the source of the bug.
To solve the z-order without the z-test, you should sort the objects yourself each frame to determine the rendering order (from far to near).
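A minimal sketch of such a manual sort, assuming objects is your list of meshes:

// Sort farthest-first so nearer objects are drawn over farther ones.
objects.sort(function (a, b) {
    return b.position.distanceToSquared(camera.position) -
           a.position.distanceToSquared(camera.position);
});
// Render (or re-add to the scene) in this order, with renderer sorting disabled.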
I'm looking at the three.js code and notice it iterates over all objects while drawing. This in turn updates the GL context for each object. But if I have a bunch of objects sharing a material, this is highly inefficient, since they might be interleaved with other objects.
How can I put my objects in an order that minimizes the GL calls? I know which objects share properties; I just don't know how to tell three.js that information.
Update: I modified the three.js code and counted the updates. It is quite wasteful. Given one logical object with two materials, for each one I add to the scene it needs to swap programs twice. So for 100 such objects it will swap 200 times, as opposed to the desired 2 swaps!
What is "optimal" is case-specific, so your question is too general to be answered. State changes are not the only issue of concern.
three.js sorts opaque objects from front to back, and transparent ones from back to front. It renders transparent objects last.
If you set
renderer.sortObjects = false;
then objects will be rendered in the order they are added to the scene. Since you know what your objects are, this is your work-around.
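For example, a sketch that groups by material before adding to the scene (meshes is an assumed array; every three.js Material carries a numeric id):

renderer.sortObjects = false;
// Keep meshes that share a material adjacent so their draw calls
// happen back-to-back without redundant program switches.
meshes.sort(function (a, b) { return a.material.id - b.material.id; });
meshes.forEach(function (m) { scene.add(m); });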
You can also merge your geometry, or use BufferGeometry to reduce the number of draw calls.
You can get info about the renderer by inspecting renderer.info in the console (or see https://github.com/spite/rstats). That way, you don't have to hack the source.
three.js r.64
In a three.js project (viewable here) I have 500 cubes, all of the same size and all statically positioned. On each of these cubes, five of the faces always remain the same color; however, the color of the sixth face can be dynamically updated, and this modification occurs across many of the cubes in a single frame and also occurs across most frames.
I've been able to implement this scene several different ways, but I have not been completely satisfied with the performance of anything I've tried. I know I must not have hit upon the right technique yet or maybe I'm not implementing one quite right. From a performance standpoint, what is the best way to change the color of these cube faces while maintaining independence across each of the cubes?
Here is what I have tried so far:
Create 500 individual CubeGeometry and Mesh instances. Change the color of a geometry face as described in the answer here: Change the colors of a cube's faces. So far this method has performed the best for me, but 500 identical geometries seem less than ideal, especially because I'm not able to achieve a steady 60 fps with a good GPU. Rendering takes about 11-20 ms here.
Create one CubeGeometry and use it across 500 Mesh instances. Create an array of MeshBasicMaterials to create a MeshFaceMaterial for each Mesh. Five of the MeshBasicMaterial instances are the same, representing the five statically colored sides of each cube. Create a unique MeshBasicMaterial to add to the MeshFaceMaterial for each Mesh. Update the color of this unique material with thisMesh.material.materials[3].uniforms.diffuse.value.copy(newColor). This method renders considerably slower than the first, at 90-110 ms, which surprises me. Maybe it's because 500 cubes with 6 materials each = 3000 materials to process?
Any advice you can offer would be much appreciated!
I discovered that three.js performs a WebGL draw for each mesh in your scene, and this is what was really hurting my performance. I looked into yaku's suggestion of using BufferGeometry, which I'm sure would be a great route, but using BufferGeometry appears to be relatively difficult unless you have a good amount of experience with WebGL/OpenGL.
However, I came across an alternative solution that was incredibly effective. I still created individual meshes for each of my 500 cubes, but then I used GeometryUtils.merge() to merge each of those meshes into a generic geometry to represent the entire group of cubes. I then used that group geometry to create a group mesh. An explanation of GeometryUtils.merge() is here.
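A minimal sketch of that merge step, using the legacy Geometry/GeometryUtils API from that era of three.js (positions is an assumed array of cube positions):

// Merge 500 cube meshes into one geometry so the group renders in one draw call.
var groupGeometry = new THREE.Geometry();
for (var i = 0; i < 500; i++) {
    var cube = new THREE.Mesh(new THREE.CubeGeometry(10, 10, 10));
    cube.position.copy(positions[i]);
    cube.updateMatrix(); // merge() applies the mesh's matrix to its vertices
    THREE.GeometryUtils.merge(groupGeometry, cube);
}
var groupMesh = new THREE.Mesh(groupGeometry,
    new THREE.MeshBasicMaterial({ vertexColors: THREE.FaceColors }));
scene.add(groupMesh);

After changing a face color on the merged geometry, remember to set groupMesh.geometry.colorsNeedUpdate = true so the change is uploaded to the GPU.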
What's especially nice about this tactic is that you still have access to all the faces that were part of the underlying geometries/meshes that you merge. In my project, this allowed me to still have full control over the face colors that I wanted control over:
// For 500 merged cubes, there will be 3000 faces in the geometry.
// This code will get the fourth face (index 3) of any cube.
_mergedCubesMesh.geometry.faces[(cubeIdx * 6) + 3].color
Here is my test application in three.js: http://zheden.elitno.net/
There are 2 cubes; the green one is the upper one. If you uncheck "Cube 2" (the yellow inner cube), it becomes invisible. When you then rotate the camera and, after rotating, check "Cube 2" back on, it becomes the outer one. This does not reproduce at all angles of rotation.
Adding "renderer.sortObjects = false" fixed the problem. But could you please explain me the reason of this behavior? Renderer sort objects based on their positions. Why order of rendering is changed when some object is transparent? It's position is not changed.
Is this related to Transparent textures behaviour in WebGL ?
Thanks,
Zhenya
There are no transparent objects in your demo, only opaque ones. You are changing the visibility.
WebGLRenderer sorts objects based on their distance from the camera, and renders objects in the sorted order. It renders opaque objects from front to back.
The rendering order can change due to how the sorting algorithm breaks ties when two objects are the same distance from the camera.
However, the render order is not necessarily changing when you toggle the visibility off and then on again. What can change is the distance value written to the depth buffer, in its least significant digits, due to round-off when you move the camera. Hence, sometimes the second object renders, and sometimes it does not.
You have two cubes of exactly the same size and orientation in exactly the same location. Do not do that. It can cause you all sorts of rendering problems, the most common of which is flickering (z-fighting).
three.js r.58