Is it possible to manually sort objects in three.js?

Three.js has a property called renderOrder on objects, which allows three.js to sort them when sortObjects = false is set on the renderer. Before sorting, three.js divides the objects into two bins (a transparent bin and an opaque bin). Within each bin the objects are sorted. The opaque bin gets rendered first, and thereafter the transparent one.
If you set sortObjects to false then you are relying on the depth buffer.
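For reference, these are the two properties in question (a minimal snippet; renderer and mesh are assumed to already exist):

renderer.sortObjects = false; // turn off three.js's built-in per-frame sorting
mesh.renderOrder = 1;         // per-object ordering value described above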
My question: is there a way I can manually sort the objects? I would like to have all my objects in one bin, since I am sorting from back to front and do not need two bins (which in my case messes things up). I do not use the depth buffer.

Related

Order-independent transparency and mixed opaque and translucent object hierarchies

We use three.js as the foundation of our WebGL engine and up until now, we only used traditional alpha blending with back-to-front sorting (which we customized a little to match the desired behavior) in our projects.
Goal: Our goal is now to incorporate the order-independent transparency algorithm proposed by McGuire and Bavoil in this paper, trying to rid ourselves of the usual problems with sorting and conventional alpha blending in complex scenes. I got it working without much hassle in a small three.js-based prototype.
Problem: The problem we have in the WebGL engine is that we're dealing with object hierarchies consisting of both opaque and translucent objects, which are currently added to the same scene so that three.js will handle transform updates. This, however, is a problem: for the above algorithm to work, we need to render to one or more FBOs (the latter due to the lack of support for MRTs in three.js r79) to calculate accumulation and revealage, and finally blend the result with the front buffer to which the opaque objects were previously rendered. In fact, this is what I do in my working prototype.
I am aware that three.js already does separate passes for both types of objects, but I'm not aware of any way to influence which render target three.js renders to (render(.., .., rt, ..) is not applicable) or how to modify the other pipeline state I need. If a mixed hierarchy is added to a single scene, I have no idea how to tell three.js where my fragments are supposed to end up. In addition, I need to reuse the depth buffer from the opaque pass during the transparent pass, with depth testing enabled but depth writes disabled.
Solution A: Now, the first obvious answer would be to simply set up two scenes and render opaque and translucent objects separately, choosing the render targets as we please, and finally do our compositing as needed.
This would be fine, except we would have to do, or at least trigger, all transformation calculations manually to achieve correct hierarchical behavior. So far, doing this seems to be the most feasible option.
Solution B: We could render the scene twice and set the visible flag of all opaque or transparent materials to false depending on which pass we're currently doing.
This is a variant of Solution A, but with a single scene instead of two. It would spare us the manual transformation calculations, but we would have to alter the materials of all objects per pass - not my favorite, but definitely worth thinking about.
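For illustration, a minimal sketch of what Solution B could look like (assuming the r79-era render(scene, camera, target) signature; opaqueTarget and oitTarget are hypothetical render targets):

function renderPass(renderer, scene, camera, drawTransparent, target) {
  // toggle visibility so only one class of materials is drawn this pass
  scene.traverse(function (obj) {
    if (obj.material) {
      obj.material.visible = (obj.material.transparent === drawTransparent);
    }
  });
  renderer.render(scene, camera, target);
}

renderPass(renderer, scene, camera, false, opaqueTarget); // opaque pass
renderPass(renderer, scene, camera, true, oitTarget);     // translucent pass
// ...then composite oitTarget over opaqueTarget as the OIT algorithm requires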
Solution C: Patch three.js to allow for more control of the rendering process.
The simplest approach here would be to tell the renderer which objects to render when render() is called, or to introduce something like renderOpaque() and renderTransparent().
Another way would be to somehow define the concept of a render pass and then render based on the information for that pass (e.g. which objects, which render target, how the pipeline is configured and so on).
Is someone aware of other, readily accessible approaches? Am I missing something or am I thinking way too complicated here?

Obtain indices from .obj

I don't know OpenGL ES, but I must use .obj 3D models in my Android app.
In the .obj file I can find vertices, texcoords and normals, but there are no indices; instead there are face elements.
Can anyone clearly explain me how to obtain indices from .obj file?
This is possibly a little hard to explain clearly, because what OBJ considers a vertex is not what OpenGL considers a vertex. Let's find out...
An OBJ file establishes lists of obj-vertices (v), texture coordinates (vt) and normals (vn). You probably don't ever want to hand these to OpenGL directly (but skip to the end for the caveat). They're just a way for your loading code to establish the meaning of v1, vt3, etc.
The only place that OpenGL-vertices are specified is within the f statement. E.g. v1/vt1/vn1 means "the OpenGL vertex with the location, texture coordinate and normal specified back in the lists".
So a workable solution to loading is, in pseudocode:
instantiate an empty hash map from v/vt/vn triples to opengl-vertex indices, an empty opengl-vertex list, and an empty list of indices for later submission to glDrawElements;
for each triple in the OBJ file:
look into the hash map to determine whether it is already in the opengl-vertex list and, if so, get the index and add it to your elements list;
if not then assign the next available index to the triple (so, this is just an incrementing number), put that into the hash map and the elements list, combine the triple and insert it into the opengl-vertex list.
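In JavaScript that might look like this (a sketch; it assumes the v/vt/vn lists have already been parsed into arrays of arrays, and all names are illustrative):

function buildBuffers(positions, texcoords, normals, faceTriples) {
  // faceTriples: flat list of strings like "3/7/3" (1-based OBJ indices),
  // three per triangle
  var indexOfTriple = {}; // hash map: triple -> opengl-vertex index
  var vertices = [];      // interleaved position/texcoord/normal floats
  var elements = [];      // indices for gl.drawElements
  var nextIndex = 0;
  faceTriples.forEach(function (triple) {
    if (indexOfTriple[triple] === undefined) {
      // first time we see this triple: emit a new opengl-vertex
      var t = triple.split('/').map(function (s) { return parseInt(s, 10) - 1; });
      indexOfTriple[triple] = nextIndex++;
      vertices.push.apply(vertices, positions[t[0]]);
      vertices.push.apply(vertices, texcoords[t[1]]);
      vertices.push.apply(vertices, normals[t[2]]);
    }
    elements.push(indexOfTriple[triple]);
  });
  return {
    vertices: new Float32Array(vertices),
    elements: new Uint16Array(elements) // 16-bit: up to 65536 unique vertices
  };
}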
You can try to do better than that by attempting to minimise the potentially highly random access it implies to your opengl-vertex list at drawing time, but don't prematurely optimise.
Caveat:
If your GPU supports vertex texture fetch (i.e. texture sampling within the vertex shader) then you could just supply the triples directly to OpenGL, having accumulated the obj-vertices, etc. into texture maps, and do the indirect lookup in your vertex shader. With vertex texture fetch, textures really just become random-access 2D arrays. However, many Android GPUs don't support vertex texture fetch (even if they support ES 3, which ostensibly makes it a requirement, as it allows an implementation to specify that it supports a maximum of zero vertex samplers).

"Overlapping" galaxies on GalSim

I am trying to draw two galaxies progressively closer and compare the result with just one galaxy. It seems that the method in Demo7 overrides one of the images if I make their bounds overlap. Is there any way I can "add" the two galaxies? In spherical coordinates, I would be placing them in similar "angular" positions (theta and phi) but different "distance" (r) positions. I'm guessing that would involve an r-coordinate "distance" parameter (because galaxies can't be on top of each other)... I tried looking at the Position class in GalSim to no avail...
The functionality that you need is provided by the "add_to_image" keyword argument of the drawImage() method. By default, drawImage() first zeros out any pixels that are going to be drawn into (i.e., "add_to_image" is False by default). However, if you call drawImage() with add_to_image=True, then the new flux gets added to whatever is there, which is necessary for drawing overlapping galaxy light profiles.
The docstring for drawImage() has more information about this keyword argument.

How to implement batches using WebGL?

I am working on a small game using WebGL. In this game I have a forest which consists of many (100+) tree objects. Because I only have a few different tree models, I rotate and scale these models in different ways before I display them.
At the moment I loop over all trees to display them:
trees.forEach(function (tree) {
  tree.display();
});
While the display() method of tree looks like:
display: function () { // tree
  this.treeModel.setRotation(this.rotation);
  this.treeModel.setScale(this.scale);
  this.treeModel.setPosition(this.position);
  this.treeModel.display();
}
Many tree objects share the same treeModel object, so I have to set the rotation/scale/position of the model every time before I display it. The rotation/scale/position values are different for every tree.
The display method of treeModel does all the gl stuff:
display: function () { // treeModel
  // bind texture
  // set uniforms for projection/modelview matrix based on rotation/scale/position
  // bind buffers
  // drawArrays
}
All tree models use the same shader but can use different textures.
Because a single tree model consists of only a few triangles, I want to combine all trees into one VBO and display the whole forest with one drawArrays() call.
Some assumptions to make talking about numbers easier:
There are 250 trees to display
There are 5 different tree models
Every tree model has 50 triangles
Questions I have:
At the moment I have 5 buffers that are 50 * 3 * 8 (position + normal + texCoord) * floatSize bytes large. If I want to display all trees with one VBO, I would have a buffer of 250 * 50 * 3 * 8 * floatSize bytes. I think I can't use an index buffer, because I have different position values for every tree (computed from the position values of the tree model and the tree's position/scale/rotation). Is this correct, or is there still a way I can use index buffers to reduce the buffer size at least a bit? Maybe there are other ways to optimize this?
How to handle the different textures of the tree models? I can bind all textures to different texture units, but how can I decide within the shader which texture should be used for the fragment that is currently displayed?
When I want to add a new tree (or any other kind of object) to this buffer at runtime: do I have to create a new buffer and copy the content? I think new values can't be added using glMapBuffer. Is this correct?
Element index buffers in WebGL use 16-bit indices, so they can only address vertices up to index 65535; past that you need to use drawArrays instead. It's usually not a big loss.
You can add trees to the end of the buffers using gl.bufferSubData (as long as the buffers were allocated large enough up front, since bufferSubData cannot grow a buffer).
If your textures are reasonably sized (like 128x128 or 256x256), you can probably merge them into one big texture and handle the whole thing with the UV coords. If not, you can add another attribute saying which texture the vertex belongs to and have a condition in the shader, or alternatively an array of sampler2Ds (not sure that works, never tried it). Remember that conditions in shaders are pretty slow.
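As a sketch, that per-vertex texture-index idea might look like this in a WebGL 1 fragment shader (written here as a GLSL string; all names are illustrative, and the index is assumed to be passed from the vertex shader as a varying):

var fragmentShaderSource = [
  'precision mediump float;',
  'uniform sampler2D uTexture0;',
  'uniform sampler2D uTexture1;',
  'varying vec2 vUv;',
  'varying float vTexIndex; // which texture this vertex was tagged with',
  'void main() {',
  '  if (vTexIndex < 0.5) {',
  '    gl_FragColor = texture2D(uTexture0, vUv);',
  '  } else {',
  '    gl_FragColor = texture2D(uTexture1, vUv);',
  '  }',
  '}'
].join('\n');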
If you decide to stick with your current solution, make sure to sort the trees so the ones using the same texture are rendered after each other - keeping state switching down is essential, always.
A few thoughts:
Once you plant a tree in your world, do you ever modify it? Will it animate at all? Or is it just static geometry? If it's truly static, you could always build a single buffer with several copies of each tree. As you append trees, first apply (in JavaScript) that instance's world transform to the vertices, as in the sketch below. If using triangle strips, you can link trees together using degenerate triangles.
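A sketch of that baking step (assuming the vec3/mat4 helpers from a library such as glMatrix; forestBuffer and the tree fields are illustrative):

var forestPositions = [];
trees.forEach(function (tree) {
  var model = tree.treeModel.positions; // flat array [x, y, z, x, y, z, ...]
  var world = tree.worldMatrix;         // mat4 built from position/rotation/scale
  for (var i = 0; i < model.length; i += 3) {
    var v = vec3.fromValues(model[i], model[i + 1], model[i + 2]);
    vec3.transformMat4(v, v, world);    // apply the instance transform on the CPU
    forestPositions.push(v[0], v[1], v[2]);
  }
});
// upload once; the geometry never changes afterwards
gl.bindBuffer(gl.ARRAY_BUFFER, forestBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(forestPositions), gl.STATIC_DRAW);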
You could roll your own pseudo-instanced drawing:
Encode an instance ID in the array buffer. Just set this to the same value for all vertices that are part of the same tree instance. I seem to recall that you can't have non-floaty vertex attributes in ES GLSL (maybe that's a Chrome limitation), so you will need to bring it in as a float but use it as an int. Since it's coming in as a float, you will have to deal with the fact that it's interpolated across your triangle, and so the value will have minor fluctuations - but simply rounding to the nearest integer fixes that right up.
Use a separate texture (which I will call the data texture) to encode all the per-instance information. In your vertex shader, look at the instance ID of the current vertex and use that to compute a texture coordinate in the data texture. Pull out whatever you need to transform the current vertex, and apply it (see the shader sketch after this list). I think this is called a "dependent texture read", which is generally frowned upon because it can cause performance issues, but it might help you batch your geometry, which can help solve performance issues. If you're interested, you'll have to try it and see what happens.
Hope for an extension to support real instanced drawing.
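A sketch of the vertex shader for that data-texture approach (WebGL 1 GLSL as a string; it requires vertex texture fetch, and all attribute/uniform names are illustrative):

var vertexShaderSource = [
  'attribute vec3 aPosition;',
  'attribute float aInstanceId;   // same value for every vertex of one tree',
  'uniform sampler2D uDataTex;    // one texel of per-instance data per tree',
  'uniform float uInstanceCount;',
  'uniform mat4 uViewProjection;',
  'void main() {',
  '  float id = floor(aInstanceId + 0.5); // round away interpolation jitter',
  '  // fetch xyz offset (and w = uniform scale) for this instance',
  '  vec4 data = texture2D(uDataTex, vec2((id + 0.5) / uInstanceCount, 0.5));',
  '  gl_Position = uViewProjection * vec4(aPosition * data.w + data.xyz, 1.0);',
  '}'
].join('\n');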
Your current approach isn't so bad. I'd say: Stick with it until you hit some wall.
50 triangles is already a reasonable batch size for a single drawElements/drawArrays call. It's not optimal, but also not so bad. So for every tree, change the parameters like location, texture and maybe shape through uniforms, then do a draw call for each tree. A total of 250 drawElements calls isn't so bad either.
So I'd use one single VBO that contains all the used tree geometry variants. I'd actually split up the trees into building blocks, so that I could recombine them for added variety. And for each tree, set appropriate offsets into the VBO before calling drawArrays or drawElements, as in the sketch below.
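That per-tree loop might look roughly like this (a sketch; the uniform location and tree fields are illustrative, and the shared VBO is assumed to be bound already):

trees.forEach(function (tree) {
  // same shader and VBO for every tree; only uniforms and texture change
  gl.uniformMatrix4fv(uModelMatrixLocation, false, tree.modelMatrix);
  gl.bindTexture(gl.TEXTURE_2D, tree.texture);
  // draw this tree's slice of the shared vertex buffer
  gl.drawArrays(gl.TRIANGLES, tree.firstVertex, tree.vertexCount);
});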
Also don't forget that you can do very cheap field-of-view culling of each tree.

Performance of using a large number of small VBOs

I wanted to know whether OpenGL VBOs are meant to be used only for large geometry arrays, or whether it makes sense to use them even for small arrays. I have code that positions a variety of geometry relative to other geometry, but some of the "leaf" geometry objects are quite small, 10x10 quad spheres and the like (200 triangles apiece). I may have a very large number of these small leaf objects. I want to be able to have a different transform matrix for each of these small leaf objects. It seems I have 2 options:
Use a separate VBO for each leaf object. I may end up with a large number of VBOs.
I could store data for multiple objects in one VBO, and apply the appropriate transforms myself when changing the data for a given leaf object. This seems strange because part of the point of OpenGL is to efficiently do large numbers of matrix ops in hardware. I would be doing in software something OpenGL is designed to do very well in hardware.
Is having a large number of VBOs inefficient, or should I just go ahead and choose option 1? Are there any better ways to deal with the situation of having lots of small objects? Should I instead be using vertex arrays or anything like that?
The most optimal solution, if your data is static and consists of one object, would be to use display lists and one VBO per mesh. Otherwise the general rule is that you want to avoid doing anything other than rendering in the main draw loop.
If you'll never need to add or remove an object after initialization (or morph any object) then it's probably more efficient to bind a single buffer permanently and change the stride/offset values to render different objects.
If you've only got a base set of geometry that will remain static, use a single VBO for that and separate VBOs for the geometry that can be added/removed/morphed.
If you can drop objects in or remove objects at will, each object should have its own VBO to make memory management much simpler.
Some good info is located at: http://www.opengl.org/wiki/Vertex_Specification_Best_Practices
I think that 200 triangles per mesh is not such a small number, and performance with one VBO for each of those meshes may not decrease that much. Unfortunately, it depends on the hardware spec.
One huge buffer will not gain you a huge performance difference... I think the best option would be to store several (but not all) objects per VBO.
Rendering using one buffer:
There is no problem with that... you simply have one buffer bound, and then you can call glDrawArrays with different parameters. For instance, if one mesh consists of 100 verts and the buffer holds 10 of those meshes, you can use:
glDrawArrays(GL_TRIANGLES, 0, 100);
glDrawArrays(GL_TRIANGLES, 100, 100);
glDrawArrays(GL_TRIANGLES, ..., 100);
glDrawArrays(GL_TRIANGLES, 900, 100);
That way you minimize buffer changes and are still able to render quite efficiently.
Are those "small" objects the same? Do they have the same geometry but different transformations/materials? Because then maybe it is worth using "instancing"?
