What kind of WebGL optimization couldn't be done using Three.js?

I've learned a lot about GLSL shaders through Three.js.
In doing so, I skipped learning the WebGL API itself.
Along the way, I still gained a broad understanding of WebGL concepts.
However, I'm curious whether I could gain more by going deeper and learning the WebGL API itself.
In other words, is there any significant room for improvement or optimization if I moved from Three.js to pure WebGL?
To be more specific about what I need to optimize:
I'm currently creating a simple raytracer in fragment shaders on Three.js, with multiple render passes along the render pipeline.
But I also want to know the benefits of pure WebGL over Three.js in general use cases.

First, I have never used Three.js, so I can't comment on how well it is optimized. But, as a general consideration, libraries like Three.js are designed to be generalist and versatile, so they are, by definition, not optimized for every use case. To be clear, they ARE optimized, probably as much as possible, but not for every usage. They make compromises and implement a lot of intermediary "invisible" mechanisms, routines, and objects to keep multiple scenarios possible in an easy way on the "user" side.
So this depends on your usage of the library. Your shaders may be the real bottleneck of your program because they are really complex, and in that case using Three.js or going directly via WebGL will change almost nothing except the shader loading/compiling part, which is done once at the start of the program. On the contrary, if you are using a lot of library features, with many objects and many per-frame function calls for transformations, buffer loading, etc., it is possible to increase performance by creating your own optimized routines.
Anyway, learning WebGL will let you understand, to a certain level, how things work in the background. In my view, there is always a benefit to learning how to do without a library: you learn the reality behind the library and become able to create your own. Once you know WebGL, you can, for example, more easily jump to OpenGL, then maybe (if you are really brave) Vulkan.
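To make the trade-off concrete, here is a minimal sketch (raw WebGL, assuming the shader program and vertex buffer were compiled/uploaded once at startup; all identifiers are illustrative placeholders, not anything from the question) of the per-frame plumbing that Three.js normally hides:

// One raw-WebGL frame: the bookkeeping Three.js does behind renderer.render().
function drawFrame(gl, program, positionBuffer, positionLoc, vertexCount) {
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.useProgram(program);
    gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    gl.enableVertexAttribArray(positionLoc);
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
}

If a heavy raytracing fragment shader dominates the frame, this loop and Three.js's equivalent cost essentially the same GPU time; hand-written WebGL only pays off when the CPU-side bookkeeping around the draw calls is the bottleneck.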

Related

Multi-pass accumulative rendering in three.js

I have written code to render a scene containing lights that work like projectors. But there are a lot of them (more than can be represented with a fixed number of lights in a single shader). So right now I render this by creating a custom shader and rendering the scene once for each light. In each pass, the fragment shader determines the contribution for that light, and the blending stage adds that contribution into the backbuffer.

I found this really awkward to set up in three.js. I couldn't find a way of doing multiple passes like this where different materials and different geometry are required for the different passes, so I had to do it by having multiple scenes. The problem there is that I can't have an Object3D that is in multiple scenes (please correct me if I'm wrong). So I need to create duplicates of the objects, one for each scene it is in. This all starts looking really hacky quickly. It's all so special that it seems to be incompatible with various three.js framework features such as VR rendering.

Each light requires shadowing, but I don't have memory for a shadow buffer for each light, so the code alternates between rendering the shadow buffer for a light, then the accumulation pass for that light, then the shadow buffer for the next light, then the accumulation pass for the next light, and so on.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working, each time forgoing yet another three.js framework feature that doesn't work properly in conjunction with my multi-pass technique. But it doesn't seem like what I'm doing is so out of the ordinary.
My main surprise is that I can't figure out a way to set up a multi-pass scene that does this back-and-forth rendering and accumulating. And my second surprise is that the Object3Ds I create don't like being added to multiple scenes at the same time, so I have to create duplicates of each object for each scene it wants to be in, in order to keep their states from interfering with each other.
So is there a way of rendering this kind of multi-pass accumulative scene in a better way? Again, I would describe it as a scene with more than the maximum number of lights allowed in a single shader pass, so their contributions need to be alternately rendered (shadow buffers) and then additively accumulated in multiple passes. The lights work like typical movie projectors that project an image (as opposed to being a uniform solid-color light source).
How can I do multi-pass rendering like this and still take advantage of good framework stuff like stereo rendering for VR and automatic shadow buffer creation?
Here's a simplified snippet that demonstrates the scenes that are involved:
// Base pass: environment lighting straight to the default framebuffer.
renderer.render(this.environment.scene, camera, null);
for (let i = 0, ii = this.projectors.length; i < ii; ++i) {
    let projector = this.projectors[i];
    // Render this projector's shadow map into the shared render target...
    renderer.setClearColor(0x000000, 1);
    renderer.clearTarget(this.shadowRenderTarget, true, true, false);
    renderer.render(projector.object3D.depthScene, projector.object3D.depthCamera, this.shadowRenderTarget);
    // ...then additively accumulate its light contribution.
    renderer.render(projector.object3D.scene, camera);
}
// Final pass: overlays and UI.
renderer.render(this.foreground.scene, camera, null);
There is a scene that renders lighting from the environment (done with normal lighting), then a scene per projector that computes the shadow map and adds in that projector's light contribution, and then a "foreground" scene with overlays and UI stuff in it.
Is there a more "three.js" way?
Unfortunately, I think the answer is no.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working,
and welcome to the world of three.js development :)
scene graph
You cannot have a node belong to multiple parents. I believe three.js also does not allow you to do this:
const myPos = new THREE.Vector3()
// Object3D transform properties are not writable; these assignments fail:
myMesh_layer0.position = myPos
myMesh_layer1.position = myPos
It won't work with Eulers, quaternions, or matrices either.
Managing the matrix updates in multiple scenes would be tricky as well.
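A common workaround (a sketch under the same hypothetical mesh names, not something from the original answer) is to keep one authoritative transform and copy its values into each per-scene duplicate every frame:

// Copy values instead of sharing the vector object itself.
const sharedPos = new THREE.Vector3();
function syncDuplicates() {
    myMesh_layer0.position.copy(sharedPos);
    myMesh_layer1.position.copy(sharedPos);
    // For full transforms, copy the matrix and disable auto-updates:
    // myMesh_layer1.matrixAutoUpdate = false;
    // myMesh_layer1.matrix.copy(myMesh_layer0.matrix);
}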
the three.js way
There is no way around the "hack upon hack" unless you start hacking the core.
Notice that it's 2018, but the official way of including three.js in your web app is still through <script> tags.
This is a great example of where it would probably be a better idea not to do things the three.js way but the modern JavaScript way, i.e. use imports, npm installs, etc.
Three.js also does not have a robust core that allows you to be flexible with the code around it. It's quite obfuscated and conflated, with only a limited number of hooks exposed that would allow you to write effects such as the one you want.
Three is often conflated with its examples; if you pick a random one, it will be written in a three.js way, but far from today's best JavaScript/coding practices.
You'll often find large monolithic files that would benefit from being broken up.
I believe it's still impossible to import the examples as modules.
Look at the material extensions examples and consider if you would want to apply that pattern in your project.
You can probably encounter more pain points, but this is enough to illustrate that the three.js way may not always be desirable.
remedies
Are few and far between. I've spent more than a year trying to push the onBeforeRender and onAfterRender hooks. They seemed useful and allowed for some refactors, but another feature had to be nuked first.
The other feature was iterated on during the course of that year and only ever addressed a single example, until it became obvious that onBeforeRender would handle both that example and much more.
This, unfortunately, also seems to be the three.js way. Since the codebase is so big and includes so many examples, it's more likely that someone will try to optimize a single example than try to find a common pattern for refactoring a whole bunch of them.
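For reference, the hooks mentioned above are per-object callbacks that fire around each draw call. A minimal sketch of using them for per-pass tweaks (assuming a ShaderMaterial; the lightIndex uniform and currentLightIndex variable are hypothetical):

mesh.onBeforeRender = function (renderer, scene, camera, geometry, material, group) {
    // Adjust per-pass state just before this mesh is drawn,
    // e.g. point the shader at the light currently being accumulated.
    material.uniforms.lightIndex.value = currentLightIndex;
};
mesh.onAfterRender = function (renderer, scene, camera, geometry, material, group) {
    // Restore any state changed above.
};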
You could go and file an issue on GitHub, but it would be very hard to argue for something as generic as this. You'd most likely have to write some code as a proposal.
This can become taxing quite quickly, because it can be rejected, ignored, or you could be asked to provide examples or refactor existing ones.
You mentioned your hacks failing to work with various three.js features, like VR. This, I think, is a problem with three.js: VR has been the focus of development for the past couple of years at least, but without the core issues ever being addressed.
The good news is, three is more modular than it has ever been, so you can fork the project and tweak the parts you need. The issues with three may then move to a higher level: if you find some coupling in the renderer, for example, that makes it hard to keep your fork in sync, that is easier to explain than the whole goal of your particular project.

How much do I NEED to know to make a 2D game engine?

I rendered a sprite. After the object is rendered, I can manipulate the points in the object to add animation. Adding a large number of variables will determine how the rendered object interacts with other objects. A 2D game running with sprites would not need lighting or textures, only the sprites and variables. Is this really all I need to know to make a game engine? Is manipulating various variables and rendering sprites all I need to know? I feel like it is too easy that way. And I also don't know where and how I can learn everything else about Java.
It depends on what you consider an "engine." Virtually any boiler-plate code or core framework of a game can be called an engine in the most literal, technical sense. What you described could very well be a suitable engine for the purposes you need it for - until you find out that it isn't.
It is likely that these fundamental skills will allow you to make a simple game, and it will be a great learning experience for you in expanding your knowledge. However, you will undoubtedly find that there is more to learn as you progress.
So, in summary, it will probably not be enough for a very complex game, but it is nonetheless where you should start anyway, and use it to learn in baby steps.
Manipulating variables is all you need for any code. Even rendering sprites is loading and modifying variables. The thing you need to know is which variables you want to add/manipulate.
To make a game engine you need to first consider what you want it to do, and plan out what variables you need and functions you want it to perform. Once you have this you can begin implementing it.
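As a sketch of that planning step, a minimal engine core is just state (the "variables"), an update step, and a render step. This is written in JavaScript for consistency with the rest of this page, although the question asks about Java; the structure carries over directly, and every name here is made up:

// Minimal engine core: state, update, render, loop.
const state = { x: 0, y: 0, vx: 60 };            // one sprite's variables
function update(dt) {
    state.x += state.vx * dt;                    // animation/physics step
}
function render(ctx, sprite) {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.drawImage(sprite, state.x, state.y);
}
let last = performance.now();
function loop(now) {
    update((now - last) / 1000);                 // delta time in seconds
    last = now;
    render(ctx, sprite);                         // ctx and sprite created elsewhere
    requestAnimationFrame(loop);
}
requestAnimationFrame(loop);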
There are quite a few game engines around already, however, and reinventing the wheel is never advised. If you want a 2D game engine to look at, Slick2D is fairly good for Java and has all the stuff you want for a basic 2D game.

Will an Octree improve performance if my scene contains less than 200K vertices?

I am finally making the move to OpenGL ES 2.0 and am taking advantage of a VBO to load all of the scene data onto the graphics card's memory. However, my scene is only around 200,000 vertices in size (and I know it depends somewhat on hardware). Does anyone think an octree would make sense in this instance? (Incidentally, because of the viewpoint, at least 60% of the scene is visible most of the time.) Clearly I am trying to avoid having to implement an octree at such an early stage of my GLSL coding life!
There is no need to worry about optimization and performance if the app you are coding is for learning purposes only. But given your question, apparently you intend to make a commercial app.
Using a VBO alone will not solve your app's performance problems, especially as you mentioned that you mean it to run on mobile devices. OpenGL ES has an optimized drawing option called GL_TRIANGLE_STRIP, which is particularly worthwhile for complex geometry.
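For illustration, in WebGL syntax (the OpenGL ES 2.0 C API is analogous), the only change from plain triangles is the primitive mode, given strip-ordered vertex data; vertexBuffer and vertexCount are placeholders:

// Same VBO, different primitive mode.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, vertexCount); // instead of gl.TRIANGLES

A strip shares two vertices between consecutive triangles, so n triangles need only n + 2 vertices instead of 3n.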
Also interesting for improving performance is to apply bump mapping, in case you have textures in your model, so fine surface detail comes from a texture rather than from extra geometry. With these two approaches your app will be remarkably improved.
As you mention that most of your scene is visible most of the time, you should also use level of detail (LOD). To implement geometry LOD, you need a different mesh for each LOD level, each with fewer polygons than the level closer to the camera. You can build the geometry for each LOD yourself, or use 3D software to generate it automatically.
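A minimal sketch of the runtime selection logic (plain JavaScript; the distance thresholds and mesh names are made-up examples):

// Pick the mesh whose distance band contains the camera distance.
const lods = [
    { maxDistance: 10, mesh: highDetailMesh },
    { maxDistance: 50, mesh: mediumDetailMesh },
    { maxDistance: Infinity, mesh: lowDetailMesh },
];
function selectLOD(cameraDistance) {
    return lods.find(l => cameraDistance <= l.maxDistance).mesh;
}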
Some of these tools are free, and you can use them to automatically perform generic optimizations directly on your GLSL ES code; they are really worth checking out.

Cocos2d 2.0 and OpenGL analyizer suggestions

I have analyzed my game by running OpenGL Analyzer in Xcode. I am using Cocos2d 2.0 as a static library in my game and wonder whether any of the following suggestions will improve my performance. I have read some posts in other forums saying that I should not worry about this, but as I do have some performance issues, I would like to understand whether those suggestions are likely to improve them.
[Analyzer screenshots omitted: Suggestions, Overview, Thinking.]
In particular, I refer to the suggestion that says:
"recommended using VAO and VBO"
I also wonder why there are "many small batch draw calls". I am using a sprite batch node, and this should avoid that issue.
The other suggestions also seem to make sense, but these are the most "frequent" ones, so I would like to start by analyzing them.
A "small batch draw call" is anything with fewer than n-many vertices. I am not sure the exact threshold used, but it is probably on the order of 100-200. What spritebatches really do is eliminate the need to split your draw calls up multiple times in order to switch bound textures, this does not automatically imply that each draw call is going to have more than 100 (or whatever n is defined as in this context) vertices; it is a strong possibility, but not necessary.
I would be far more concerned about non-VBO draw calls and not using VAOs to be honest, especially if you want your code to be forward-compatible.
The "Logical Buffer Load" and "Mipmapping Usage" warnings are very likely related; probably both having to do with FBOs. One of them is related to not using glClear (...) properly and the other is related to using a texture that does not have mipmaps.
Regarding logical buffer loads, you should look into GL_EXT_discard_framebuffer, clearing the framebuffer this way is a really healthy optimization strategy for Tile-Based Deferred Rendering GPUs (such as the ones used by all iOS devices).
As for the mipmap usage warning, I believe this is being triggered because you are drawing into an FBO texture and then applying that texture using a mipmap filter mode. The mip chain/pyramid for rendered FBO textures has to be built manually using glGenerateMipmap (...).
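In WebGL syntax (the iOS OpenGL ES calls are analogous), the fix looks roughly like this; fbo and fboTexture are placeholder names, and in WebGL 1 / ES 2.0 the texture must be power-of-two for mipmapping:

// After the offscreen pass, rebuild the texture's mip chain before
// sampling it with a mipmap minification filter.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);          // done rendering into fbo
gl.bindTexture(gl.TEXTURE_2D, fboTexture);
gl.generateMipmap(gl.TEXTURE_2D);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);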
If you can point me to some individual lines that trigger these warnings, I would be happy to explain them in further detail for you.

Canvas 2d context or WebGL for 2D game

I'm planning on writing a game which will use a lot of sprites and images. At first I tried EaselJS, but after playing some other canvas-based games I realized it's not that fast. And when I saw BananaBread by Mozilla I thought "if WebGL can do 3D so fast, then it can do 2D even faster". So I moved to three.js (using planes and transparent textures, texture offset for sprites).
The question is: is it better? Faster? Most of the WebGL games are 3D so should I use canvas 2D context for 2D and WebGL for 3D? I've also noticed that there are no libraries for WebGL in 2D (except WebGL-2d, but it's quite low level).
Please note that compatibility is not my greatest concern as I'm not planning on releasing anything anytime soon :) .
The short answer is yes. WebGL can be quite a bit more efficient if you use it well. A naive implementation will either yield no benefit or perform worse, so if you're not already familiar with the OpenGL API you may want to stick to canvas for the time being.
A few more detailed notes: WebGL can draw textured quads (sprites) very very fast, but if you need more advanced 2D drawing features such as path tracing you'll want to stick to a 2D canvas as implementing those types of algorithms in WebGL is non-trivial. The nature of your game also makes a difference in your choice. If you only have a few moving items on screen at a time Canvas will be fairly fast and reasonably simple. If you're redrawing the entire scene every frame, however, WebGL is better suited to that type of render loop.
My recommendation? If you're just learning both, start with Canvas2D and make your game work with that. Abstract your drawing in a simple manner, such as having a DrawPlayer function rather than calling ctx.drawImage(playerSprite, ....) directly, and when you reach a point where the game is either functioning and you want it to run faster, or the game design dictates that you MUST use a faster drawing method, create an alternate rendering backend for all those abstract functions with WebGL. This keeps you from getting hung up on rendering tech early on (which is ALWAYS a mistake!), lets you focus on gameplay, and if you end up implementing both methods you have a great fallback for non-WebGL browsers like Internet Explorer. Chances are you won't really need the increased speed for a while anyway.
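A sketch of that abstraction (the answer only names a DrawPlayer-style function; everything else here is illustrative):

// Game code calls the abstraction, never a drawing API directly.
const canvas2dBackend = {
    drawPlayer(sprite, x, y) {
        ctx.drawImage(sprite, x, y);             // ctx: a 2D canvas context
    },
};
// A WebGL backend can later implement the same interface,
// e.g. by drawing a textured quad for each sprite.
let backend = canvas2dBackend;                   // swap the backend in one place
function drawPlayer(sprite, x, y) {
    backend.drawPlayer(sprite, x, y);
}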
WebGL can be much faster than canvas 2D. See http://blog.tojicode.com/2012/07/sprite-tile-maps-on-gpu.html as one example.
That said, I think you're mostly on your own right now. I don't know of any 2d libraries for WebGL except for maybe PlayN http://code.google.com/p/playn/ though that's in Java and uses the Google Web Toolkit to get converted to JavaScript. I'm also pretty sure it doesn't use the techniques mentioned in that blog post above, although if your game does not use tiles maybe that technique is not useful for you.
three.js is arguably not the library you want if you're planning on 2d.
