I have written code to render a scene containing lights that work like projectors. But there are a lot of them (more than can be represented with a fixed number of lights in a single shader), so right now I render the scene once for each light using a custom shader. In each pass the fragment shader determines the contribution for that light and the blender adds that contribution into the backbuffer.

I found this really awkward to set up in three.js. I couldn't find a way of doing multiple passes like this where the different passes need different materials and different geometry, so I had to do it with multiple scenes. The problem there is that I can't have an Object3D that is in multiple scenes (please correct me if I'm wrong), so I need to create duplicates of the objects, one for each scene they appear in. This all starts looking really hacky quickly, and it's all so special that it seems to be incompatible with various three.js framework features such as VR rendering.

Each light requires shadowing, but I don't have memory for a shadow buffer per light, so the code alternates: render the shadow buffer for a light, then the accumulation pass for that light, then the shadow buffer for the next light, then the accumulation pass for the next light, and so on.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working, each time forgoing yet another three.js framework feature that doesn't work properly in conjunction with my multi-pass technique. But it doesn't seem like what I'm doing is so out of the ordinary.
My main surprise is that I can't figure out a way to set up a multi-pass scene that does this back-and-forth rendering and accumulating. And my second surprise is that the Object3Ds I create don't like being added to multiple scenes at the same time, so I have to create duplicates of each object for each scene it needs to be in, in order to keep their states from interfering with each other.
So is there a better way of rendering this kind of multi-pass accumulative scene? Again, I would describe it as a scene with more lights than the maximum number allowed in a single shader pass, so their contributions need to be alternately rendered (shadow buffers) and then additively accumulated over multiple passes. The lights work like typical movie projectors that project an image (as opposed to being a uniform solid-color light source).
How can I do multi-pass rendering like this and still take advantage of good framework stuff like stereo rendering for VR and automatic shadow buffer creation?
Here's a simplified snippet that demonstrates the scenes that are involved:
// Pass 1: environment lighting (normal three.js lights) straight to the backbuffer.
renderer.render(this.environment.scene, camera, null);

// One shadow pass plus one additive accumulation pass per projector,
// reusing a single shadow render target for all of them.
for (let i = 0, ii = this.projectors.length; i < ii; ++i) {
  let projector = this.projectors[i];
  renderer.setClearColor(0x000000, 1);
  renderer.clearTarget(this.shadowRenderTarget, true, true, false);
  // Render this projector's depth (shadow) map into the shared target...
  renderer.render(projector.object3D.depthScene, projector.object3D.depthCamera, this.shadowRenderTarget);
  // ...then additively blend this projector's contribution into the backbuffer.
  renderer.render(projector.object3D.scene, camera);
}

// Final pass: overlays / UI.
renderer.render(this.foreground.scene, camera, null);
There is a scene that renders lighting from the environment (done with normal lighting), then there is a scene per projector that computes the shadow map for that projector and adds in its light contribution, and finally there is a "foreground" scene with overlays and UI stuff in it.
Is there a more "three.js" way?
Unfortunately I think the answer is no.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working,
and welcome to the world of three.js development :)
scene graph
You cannot have a node belong to multiple parents. I believe three also does not allow you to do this:
const myPos = new THREE.Vector3()
myMesh_layer0.position = myPos
myMesh_layer1.position = myPos
It won't work with Eulers, quaternions or a matrix either.
Managing the matrix updates in multiple scenes would be tricky as well.
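What you can do, as far as I know, is keep the duplicates in sync yourself by copying the transform of the "real" object onto its stand-in every frame before rendering. A minimal sketch (source and proxy are your own objects, not three.js API):

function syncProxy(source, proxy) {
  // copy, don't share: each object keeps its own Vector3/Quaternion instances
  proxy.position.copy(source.position);
  proxy.quaternion.copy(source.quaternion);
  proxy.scale.copy(source.scale);
}

It's boilerplate, but it keeps both scene graphs consistent without fighting the library.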
the three.js way
There is no way around the "hack upon hack" unless you start hacking the core.
Notice that it's 2018, but the official way of including three.js in your web app is still through <script> tags.
This is a great example of where it would probably be a better idea not to do things the three.js way but the modern JavaScript way, i.e. use imports, npm installs, etc.
Three.js also does not have a robust core that allows you to be flexible with the code around it. It's quite convoluted, with only a limited number of hooks exposed that would allow you to write effects such as the one you want.
Three is often conflated with its examples: if you pick a random one, it will be written in a three.js way, but far from today's best JavaScript/coding practices.
You'll often find large monolithic files, that would benefit from being broken up.
I believe it's still impossible to import the examples as modules.
Look at the material extensions examples and consider if you would want to apply that pattern in your project.
You can probably encounter more pain points, but this is enough to illustrate that the three.js way may not always be desirable.
remedies
Are few and far between. I've spent more than a year trying to push the onBeforeRender and onAfterRender hooks. They seem useful and allowed for some refactors, but another feature had to be nuked first.
The other feature was iterated on during the course of that year and only ever addressed a single example, until it became obvious that onBeforeRender would both address that example and allow for much more.
This unfortunately also seems to be the three.js way. Since the base is so big and includes so many examples, it's more likely that someone will try to optimize a single example than try to find a common pattern for refactoring a whole bunch of them.
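For reference, this is roughly what those hooks look like. The callback signature is the real one; the projector uniform is just a made-up placeholder for the kind of per-pass state you might swap:

mesh.onBeforeRender = function (renderer, scene, camera, geometry, material /*, group */) {
  // e.g. point the material at the current projector before this draw call (hypothetical uniform)
  material.uniforms.projectorMatrix.value.copy(currentProjectorMatrix);
};
mesh.onAfterRender = function (renderer, scene, camera, geometry, material) {
  // restore any state you changed above
};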
You could go and file an issue on github, but it would be very hard to argue for something as generic as this. You'd most likely have to write some code as a proposal.
This can become taxing quite quickly, because it can be rejected, ignored, or you could be asked to provide examples or refactor existing ones.
You mentioned your hacks failing to work with various three.js features, like VR. This, I think, is a problem with three: VR has been the focus of development for at least the past couple of years, but without the core issues ever being addressed.
The good news is that three is more modular than it has ever been, so you can fork the project and tweak the parts you need. The issues with three may then move to a higher level: if you find some coupling in the renderer, for example, that makes it hard to keep your fork in sync, it would be easier to explain than the whole goal of your particular project.
Related
I'm working on a project that uses a lot of lines and marks with the camera at a very low angle (almost at ground level). I'm also using an outline effect to highlight selected objects in different ways (selection, collisions, etc.).
Native AA is lost when using postprocessing effects (e.g. the outline effect). This causes jagged lines on screen, more noticeable when the camera is closer to ground level.
I have created this jsFiddle to illustrate the issue (using ThreeJS r111):
https://jsfiddle.net/Eketol/s143behw/
Just press the mouse button / touch the 3D scene to render without postprocessing effects and release to render with them again.
Some posts suggest that an FXAAShader pass will solve it, but I haven't had any luck with it. Instead, I get some artifacts on the scene and in some cases the whole image is broken.
So far my options are:
Get a workaround for the outline effect without postprocessing:
The ones I've seen around (e.g. https://stemkoski.github.io/Three.js/Outline.html) duplicate the meshes/geometries to render a bigger version with a solid color applied behind the main object. While that may be OK with basic static geometries like a cube, it doesn't seem like an efficient solution when using complex 3D objects that you need to drag around (objects with multiple meshes).
Increase the renderer's pixel ratio to get a bigger frame size: This is not an option for me. In my tests it doesn't make a big difference and it also makes rendering slower.
Try to get FXAAShader working without artifacts: As I said, it doesn't seem to fix the issue as well as native AA does (and it is not as compatible). Maybe I'm not using it correctly (the setup I've been trying is sketched below), but I still get lines that look jagged even with the antialiasing applied.
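For reference, this is roughly the setup I've been trying, following the pattern from the official webgl_postprocessing_fxaa example (container is my own canvas wrapper element, and composer already has a RenderPass and the OutlinePass added):

const fxaaPass = new THREE.ShaderPass(THREE.FXAAShader);
const pixelRatio = renderer.getPixelRatio();
// FXAA expects the inverse of the real framebuffer size, including the pixel ratio
fxaaPass.material.uniforms['resolution'].value.x = 1 / (container.offsetWidth * pixelRatio);
fxaaPass.material.uniforms['resolution'].value.y = 1 / (container.offsetHeight * pixelRatio);
composer.addPass(fxaaPass); // added last, after the outline pass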
Question 1: It may sound silly, but I thought there would be an easy way to send the antialiased image directly to the composer, or at least that there could be some extra pass to do this, keeping the native AA. Is this possible?
Question 2: Maybe using Scene.onAfterRender to get the image with native AA and then blending the outline effect somehow?
Question 3: Googling around, it seems this issue also affects Unity. In this post, it says this won't be a problem with WebGL2. Does this also apply to ThreeJS?
I have a project that loads and renders 3 different scenes in three different areas of the site.
At the moment, every time I need to change scenes, I remove the canvas (and all three.js listeners and iterators) and render the new scene from scratch.
Is this good practice, or are there performance benefits to creating a single scene with its renderer and loading into it the different meshes, lights and cameras from the three different scenes?
Has anyone already tested a similar scenario?
Are you looking for something like this?
If you are looking to show one scene at a time, I think the best option is to create the scenes and render only the selected one. See THREE.WebGLRenderer.render().
Performance-wise it would probably be slightly better to use a single renderer instance and pass different scene/camera objects to its render() call, relocating renderer.domElement on your site if necessary. That way you don't need to keep multiple canvases and rendering contexts around, and the renderer may be able to cache a few WebGL-related things. This approach also makes it possible to reuse things like materials across scenes.
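A rough sketch of what I mean, assuming you already have three scene/camera pairs and container elements on the page (sceneA, cameraA, etc. stand in for your own objects):

const renderer = new THREE.WebGLRenderer({ antialias: true });

function show(scene, camera, containerElement) {
  // one canvas and one GL context, moved to whichever area of the site is active
  containerElement.appendChild(renderer.domElement);
  renderer.setSize(containerElement.clientWidth, containerElement.clientHeight);
  renderer.render(scene, camera);
}

show(sceneA, cameraA, document.querySelector('#area-a'));
// later, when the user navigates to another area:
show(sceneB, cameraB, document.querySelector('#area-b'));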
However, the performance difference should be minimal and only (or at least mostly) initialisation cost, in a range of at most ~100 ms (it depends of course on what exactly you are doing, but this would be my guess).
So if it feels better to have everything isolated (and there might be good reasons for that as well depending on how your app is built): Just keep it that way and measure if it has any negative impact (my guess: it probably won't).
We use three.js as the foundation of our WebGL engine and up until now, we only used traditional alpha blending with back-to-front sorting (which we customized a little to match the desired behavior) in our projects.
Goal: Now our goal is to incorporate the order-independent transparency algorithm proposed by McGuire and Bavoil in this paper, trying to rid ourselves of the usual problems with sorting and conventional alpha blending in complex scenes. I got it working without much hassle in a small three.js-based prototype.
Problem: The problem we have in the WebGL engine is that we're dealing with object hierarchies consisting of both opaque and translucent objects which are currently added to the same scene so three.js will handle transform updates. This, however, is a problem, since for the above algorithm to work, we need to render to one or more FBOs (the latter due to the lack of support for MRTs in three.js r79) to calculate accumulation and revealage and finally blend the result with the front buffer to which opaque objects have been previously rendered and in fact, this is what I do in my working prototype.
I am aware that three.js already does separate passes for both types of objects, but I'm not aware of any way to influence which render target three.js renders to (render(.., .., rt, ..) is not applicable) or how to modify the other pipeline state I need. If a mixed hierarchy is added to a single scene, I have no idea how to tell three.js where my fragments are supposed to end up, and in addition I need to reuse the depth buffer from the opaque pass during the transparent pass, with depth testing enabled but depth writes disabled.
Solution A: Now, the first obvious answer would be to simply set up two scenes and render opaque and translucent objects separately, choosing the render targets as we please and finally doing our compositing as needed.
This would be fine except we would have to do or at least trigger all transformation calculations manually to achieve correct hierarchical behavior. So far, doing this seems to be the most feasible.
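Roughly what we have in mind for Solution A, as far as I can tell with the r79 APIs (masterRoot, proxyPairs, opaqueScene, translucentScene and accumTarget are our own bookkeeping, not three.js concepts): keep the full logical hierarchy under one never-rendered root, update its matrices manually, and mirror the world matrices onto flat proxies in the two render scenes:

masterRoot.updateMatrixWorld(true); // compute world matrices for the whole logical hierarchy
proxyPairs.forEach(function (pair) { // pair.source lives in the master hierarchy, pair.proxy in a render scene
  pair.proxy.matrixAutoUpdate = false;
  pair.proxy.matrix.copy(pair.source.matrixWorld); // proxy sits directly under its scene's root
  pair.proxy.matrixWorldNeedsUpdate = true;
});
renderer.render(opaqueScene, camera);                   // opaque pass to the front buffer
renderer.render(translucentScene, camera, accumTarget); // translucent pass into our accumulation FBO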
Solution B: We could render the scene twice and set the visible flag of all opaque or transparent materials to false depending on which pass we're currently doing.
This is a variant of Solution A, but with a single scene instead of two scenes. This would spare us the manual transformation calculations but we would have to alter materials for all objects per pass - not my favorite, but definitely worth thinking about.
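A sketch of Solution B in the same spirit (assuming one material per mesh, transparent materials flagged as such, and the same hypothetical accumTarget as above):

function setPassVisibility(scene, wantTransparent) {
  scene.traverse(function (obj) {
    if (obj.material) {
      obj.material.visible = (obj.material.transparent === wantTransparent);
    }
  });
}

setPassVisibility(scene, false);
renderer.render(scene, camera);              // opaque pass to the front buffer
setPassVisibility(scene, true);
renderer.render(scene, camera, accumTarget); // translucent pass into the FBO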
Solution C: Patch three.js as to allow for more control of the rendering process.
The simplest approach here would be to tell the renderer which objects to render when render() is called, or to introduce something like renderOpaque() and renderTransparent().
Another way would be to somehow define the concept of a render pass and then render based on the information for that pass (e.g. which objects, which render target, how the pipeline is configured and so on).
Is someone aware of other, readily accessible approaches? Am I missing something or am I thinking way too complicated here?
I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and so on... indeed, I already tried some engines and they never quite matched what I want to do, so I decided I want to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc. and I'm willing to take the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... but I think I do not really want to go that way. A lot of things about working with Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code I could rotate a box around to make a creature look at a player, and I could use a sine function to move legs and arms (or wings, in my case). That worked, since Minecraft used immediate mode and Java could directly tell the graphics card where to draw each vertex.
So, actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever) and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation-matrices for each sub-shape? Would that put a hard limit on the amount of sub-shapes a model could have? Did anyone try something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor, the render and setRotationAngles methods set scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move one robot arm with multiple joints.
I did a port in Java using JOGL here, but you can easily port it over to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being the fact you do not want to load animations for your bones but want to compute / generate transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It looks like you want to manipulate this geometry "rigidly", so before sending your meshes/transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices to draw your geometries on the GPU the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate transforms, given the transform of the parent bone and your local transform w.r.t. the parent.
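The walk itself is tiny. Here it is sketched in JavaScript for brevity (the idea is identical in Java), with THREE.Matrix4 standing in for whatever 4x4 matrix class you use, and a bone assumed to be a plain { local, world, children } structure of your own:

function updateWorldTransforms(bone, parentWorld) {
  // world = parentWorld * local, accumulated top-down through the hierarchy
  bone.world = new THREE.Matrix4().multiplyMatrices(parentWorld, bone.local);
  bone.children.forEach(function (child) {
    updateWorldTransforms(child, bone.world);
  });
}

updateWorldTransforms(rootBone, new THREE.Matrix4()); // identity at the root
// regenerate bone.local each frame (e.g. a sine-driven wing rotation) before this walk,
// then upload bone.world as the model matrix for the geometry attached to that bone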
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example the "player" whould have a translation to 100,100, 10 (where the player is), and then the "head" subcomponent would have an additional translation of 0,0,5 (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, rotation and scaling) and use glPushMatrix and glPopMatrix to add and remove a matrix to/from a stack maintained by OpenGL.
The draw() function (or whatever you call it) should look something like:
glPushMatrix();
glMultMatrix(my_transform); // You can also just have glTranslate, glRotate or anything else.
// Draw my mesh
for (child : children) { child.draw(); }
glPopMatrix();
This gives you a hierarchical setup so that objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (use a library). I think the OpenGL stack may have a limit (implementation dependent), but if you handle it yourself the only limit is the amount of RAM you can use. Once all the matrices are multiplied, rendering takes the same amount of time, i.e. it doesn't matter for performance how deep a mesh is in the hierarchy.
For actual animations you need to compute the intermediate transformations. For example for a crouch animation you probably want to have a few frames in between so that the camera doesn't just jump to the low position. You can do this with a time based linear interpolation between the start and end positions, but this only covers simple animations and you still have to implement it yourself.
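For example, a time-based interpolation for a single joint angle could look like this (sketched in JavaScript for brevity; startAngle, endAngle and durationMs are whatever your animation defines):

function animatedAngle(startAngle, endAngle, startTime, durationMs, now) {
  let t = (now - startTime) / durationMs;  // normalized progress
  t = Math.min(Math.max(t, 0), 1);         // clamp to [0, 1]
  return startAngle + (endAngle - startAngle) * t;
}
// feed the result into the joint's local transform before rebuilding the world matrices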
Anything more complicated (i.e. modify the mesh based on the bone links) you would need to implement yourself.
I am finally making the move to OpenGL ES 2.0 and am taking advantage of a VBO to load all of the scene data onto the graphics card's memory. However, my scene is only around 200,000 vertices in size (and I know it depends on the hardware somewhat), but does anyone think an octree would make any sense in this instance? (Incidentally, because of the viewpoint, at least 60% of the scene is visible most of the time.) Clearly I am trying to avoid having to implement an octree at such an early stage of my GLSL coding life!
There is no need to worry about optimization and performance if the app you are coding is for learning purposes only. But given your question, apparently you intend to make a commercial app.
Using a VBO alone will not solve your app's performance problems, especially as you mentioned that you mean it to run on mobile devices. OpenGL ES has an optimized drawing option called GL_TRIANGLE_STRIP, which is particularly worthwhile for complex geometry.
Also worth adding to improve performance is bump mapping, in case you have textures in your model: it lets you fake surface detail without extra geometry. With these two approaches your app will be remarkably improved.
As you mention that most of your scene is visible most of the time, you should also use level of detail (LOD). To implement geometry LOD you need a different mesh for each LOD level you wish to use, each level having fewer polygons than the one closer to the camera. You can make the geometry for each LOD yourself, or you can use 3D software to generate it automatically.
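A minimal distance-based selection is enough to get started. Sketched here in JavaScript for brevity, with the thresholds and mesh names being made-up placeholders:

const lodLevels = [
  { maxDistance: 20, mesh: highDetailMesh },
  { maxDistance: 60, mesh: midDetailMesh },
  { maxDistance: Infinity, mesh: lowDetailMesh }
];

function pickLOD(distanceToCamera, levels) {
  // levels must be sorted from nearest to farthest threshold
  for (const level of levels) {
    if (distanceToCamera <= level.maxDistance) return level.mesh;
  }
  return levels[levels.length - 1].mesh;
}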
Some tools are free, and you can use them to automatically perform generic optimizations directly on your GLSL ES code; it is really worth checking them out.