Using Ogre OctreeSceneManager or Generic Scene Manager - ogre

Has anybody used the OctreeSceneManager in Ogre? Is it faster for rendering than the Generic Scene Manager? I use the OpenGL rendering system.

(Better late than never ;-) )
Quoting the Ogre Wiki, an OSM (Octree Scene Manager)
Uses an octree to split the scene and performs well for most scenes, except those which are reliant on heavy occlusion.
Now there's one big difference between a GSM (Generic Scene Manager) and an OSM:
The GSM relies on you, the developer, to build a scene node hierarchy while the OSM builds an octree of all scene nodes automatically in the background.
This means that if you're going to add all your scene nodes to the root node, the GSM will be much slower [1].
The usage of both remains the same; you're still free to organize your nodes as you want.
When you do a lot of frustum culling or (ray) scene queries you may want to use the OSM, as it will perform better [2] (but just remember not to bunch all your scene nodes together under the root node).
Unless you have a scene which relies on heavy occlusion, or you have to add all your scene nodes directly under the root node, I'd prefer the OSM.
[1] GSM vs. OSM
[2] How to use OSM

Related

Is there a way to create simple animations "on the fly" in modern OpenGL?

I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and so on... indeed, I already tried some engines and they never quite matched what I want to do, so I decided I want to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc. and I'm willing to take the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... but I think I do not really want to go that way. A lot of things about working with Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code I could rotate a box around to make a creature look at a player, and I could use a sine function to move legs and arms (or wings, in my case), and that worked, since Minecraft used immediate mode and Java could directly tell the graphics card where to draw each vertex.
So, the actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever) and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation matrices for each sub-shape? Would that put a hard limit on the number of sub-shapes a model could have? Has anyone tried something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor; the render and setRotationAngles methods set the scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move a robot arm with multiple joints.
I did a port in Java using JOGL here, but you can easily port it over to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being the fact you do not want to load animations for your bones but want to compute / generate transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It looks like you want to manipulate this geometry "rigidly", so before sending your meshes/transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices to draw your geometry on the GPU the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate the transforms, given the transform of the parent bone and your local transform with respect to the parent.
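For illustration, a minimal Java sketch of that accumulation step could look like this (it assumes the JOML math library for Matrix4f, which is commonly paired with LWJGL3; the class and field names are made up):

import org.joml.Matrix4f;
import java.util.ArrayList;
import java.util.List;

// One "bone"/node of the model hierarchy; names are illustrative only.
class ModelNode {
    Matrix4f localTransform = new Matrix4f(); // rotation/translation relative to the parent
    List<ModelNode> children = new ArrayList<>();

    // Walk the hierarchy, accumulating parent * local into a world matrix for each node.
    void computeWorldTransforms(Matrix4f parentWorld, List<Matrix4f> out) {
        Matrix4f world = new Matrix4f(parentWorld).mul(localTransform);
        out.add(world); // upload later as the model matrix uniform when drawing this node's mesh
        for (ModelNode child : children) {
            child.computeWorldTransforms(world, out);
        }
    }
}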
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example, the "player" would have a translation to (100, 100, 10) (where the player is), and then the "head" sub-component would have an additional translation of (0, 0, 5) (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, rotation and scaling) and use glPushMatrix and glPopMatrix to add and remove a matrix to/from a stack maintained by OpenGL.
The draw() function (or whatever you call it) should look something like this:
void draw() {
    glPushMatrix();
    glMultMatrixf(myTransform); // or just glTranslatef/glRotatef calls instead of a full matrix
    // draw this node's mesh here
    for (Node child : children) { child.draw(); }
    glPopMatrix();
}
This gives you a hierarchical setup so that objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (use a library). I think the OpenGL stack may have a limit (implementation dependent), but if you handle it yourself the only limit is the amount of RAM you can use. Once all the matrices are multiplied, rendering takes the same amount of time; that is, it doesn't matter for performance how deep a mesh is in the hierarchy.
For actual animations you need to compute the intermediate transformations. For example, for a crouch animation you probably want a few frames in between so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions, but this only covers simple animations and you still have to implement it yourself.
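For example, a rough Java sketch of such a time-based interpolation for a crouch could look like this (the heights and duration are placeholder values, not from the question):

// Interpolates an eye height from standing to crouched over a fixed duration.
class CrouchAnimation {
    static final float STANDING_HEIGHT = 1.8f; // placeholder values
    static final float CROUCH_HEIGHT = 0.9f;
    static final float DURATION = 0.25f;       // seconds from standing to fully crouched
    float elapsed = 0.0f;                      // accumulated frame time since the crouch started

    // Call once per frame with the frame's delta time; returns the interpolated height.
    float update(float deltaSeconds) {
        elapsed = Math.min(elapsed + deltaSeconds, DURATION);
        float t = elapsed / DURATION;          // progress in [0, 1]
        return STANDING_HEIGHT + (CROUCH_HEIGHT - STANDING_HEIGHT) * t;
    }
}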
Anything more complicated (e.g. modifying the mesh based on the bone links) you would need to implement yourself.

SceneKit - Occlusion culling

I've played with SceneKit on iOS 8 for quite a while, and recently I ran into a situation in which I need to detect whether a node does not appear in the viewport. Occlusion culling might be a possible solution. Is there any occlusion culling option available in SceneKit, and if not, what other approaches might I try? Thanks!
The isNodeInsideFrustum:withPointOfView: method tells you whether a node is inside the camera's field of view, but it won't tell you whether it's occluded by other scene geometry.
If you need occlusion testing, a frustum test is a good place to start. Once you know a node is in the view frustum, you can do hit tests to see if there are any nodes in between. If the results of a hit test include nodes other than your target, it may be at least partially obscured.
Hit testing won't get you extreme detail (like whether any rendered pixels of one node would be visible behind those of other nodes), but it might be enough for what you need. You can refine the sensitivity of hit testing a bit with the options parameter and by choosing which points to test — e.g. just the center of a target node or the corners of its bounding box. Hit testing has a CPU performance cost, too, so you'll have to find the right tradeoff between the functionality you want and your target frame rate.
SCNView, via the SCNSceneRenderer protocol, implements isNodeInsideFrustum:withPointOfView:. It lets you test whether a node is inside the frustum of a given camera (point of view).
https://developer.apple.com/library/mac/documentation/SceneKit/Reference/SCNSceneRenderer_Protocol/Reference/SCNSceneRenderer.html#//apple_ref/occ/intfm/SCNSceneRenderer/isNodeInsideFrustum:withPointOfView:

Performance degrades when adding several worldpoint instances to children

We've successfully implemented a demo project based on the Earth examples in the Helix 3D Toolkit, and we are creating an effect where the world rotates by using a timer and AddRotateForce on the camera controller. The problem we are having is that when more than a couple of hundred worldpoint instances are added to the HelixViewport3D children collection, the rotation effect slows down to the point of not being usable.
Is there a better way to add additional children to the world sphere without affecting performance dramatically?

Will an Octree improve performance if my scene contains less than 200K vertices?

I am finally making the move to OpenGL ES 2.0 and am taking advantage of a VBO to load all of the scene data onto the graphics card's memory. However, my scene is only around 200,000 vertices in size (and I know it depends somewhat on hardware), but does anyone think an octree would make any sense in this instance? (Incidentally, because of the viewpoint, at least 60% of the scene is visible most of the time.) Clearly I am trying to avoid having to implement an octree at such an early stage of my GLSL coding life!
There is no need to worry about optimization and performance if the app you are coding is for learning purposes only. But given your question, apparently you intend to make a commercial app.
Using a VBO alone will not solve your app's performance problems, especially as you mentioned that you mean it to run on mobile devices. OpenGL ES has an optimized drawing mode, GL_TRIANGLE_STRIP, which is particularly worthwhile for complex geometry.
Also interesting for improving performance is bump mapping, in case you have textures in your model. With these two approaches your app will be noticeably improved.
Since you mention that most of your scene is visible most of the time, you should also use level of detail (LOD). To implement geometry LOD, you need a different mesh for each LOD level you wish to use, with each level having fewer polygons than the next closer one. You can build the geometry for each LOD yourself, or use 3D software to generate it automatically; a simple distance-based switch is sketched below.
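For illustration, a distance-based LOD switch can be as simple as this Java sketch (the thresholds and mesh handles are made-up placeholders):

// Pick a mesh by distance from the camera; thresholds and handles are illustrative only.
static int selectLodMesh(float distanceToCamera, int meshNear, int meshMid, int meshFar) {
    if (distanceToCamera < 20.0f) return meshNear;  // full-detail mesh up close
    if (distanceToCamera < 100.0f) return meshMid;  // reduced polygon count at medium range
    return meshFar;                                 // lowest detail for distant objects
}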
There are also free tools that you can use to automatically perform generic optimizations directly on your GLSL ES code, and they are really worth checking out.

Efficient way of drawing in OpenGL ES

In my application I draw a lot of cubes through the OpenGL ES API. All the cubes have the same dimensions; they are just located at different coordinates in space. I can think of two ways of drawing them, but I am not sure which is the most efficient one. I am no OpenGL expert, so I decided to ask here.
Method 1, which is what I use now: Since all the cubes have identical dimensions, I calculate the vertex buffer, index buffer, normal buffer and color buffer only once. During a refresh of the scene, I go over all cubes, do bufferData() for the same set of buffers, and then draw the triangle mesh of the cube using a drawElements() call. Since each cube is at a different position, I translate the mvMatrix before I draw. bufferData() and drawElements() are executed for each cube. In this method, I probably save a lot of memory by not calculating the buffers every time. But I am making a lot of drawElements() calls.
Method 2 would be: Treat all cubes as a set of polygons spread all over the scene. Calculate the vertex, index, color and normal buffers for each polygon (actually the triangles within the polygons) and push them to graphics card memory in a single call to bufferData(). Then draw them with a single call to drawElements(). The advantage of this approach is that I do only one bindBuffer and drawElements call. The downside is that I use a lot of memory to create the buffers.
My experience with OpenGL is limited enough that I don't know which one of the above methods is better from a performance point of view.
I am using this in a WebGL app, but it's a generic OpenGL ES question.
I implemented method 2 and it wins by a landslide. The supposed downside of high memory use seemed to be only my imagination. In fact, the garbage collector got invoked only once in method 2, while it was invoked 4-5 times in method 1.
Your OpenGL scenario might be different from mine, but if you reached here in search of performance tips, the lesson from this question is: identify the parts of your scene that don't change frequently. No matter how big they are, put them in a single buffer set (VBOs) and upload them to graphics memory the minimum number of times. That's how VBOs are meant to be used. The memory bandwidth between the client (i.e. your app) and the graphics card is precious, and you don't want to consume it often without reason.
Read the section "Vertex Buffer Objects" in Ch. 6 of "OpenGL ES 2.0 Programming Guide" to understand how they are supposed to be used. http://opengles-book.com/
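As an illustration of that lesson, here is a rough Java sketch (LWJGL-style static GL bindings; the helper names and the positions-only vertex layout are assumptions, not the poster's actual code) that pre-translates each cube's vertices on the CPU and uploads the whole scene with a single bufferData call:

import static org.lwjgl.opengl.GL15.*;
import org.lwjgl.BufferUtils;
import java.nio.FloatBuffer;

class CubeBatcher {
    // Concatenate all cubes into one big vertex buffer so one draw call renders the scene.
    // cubeVertices holds one cube centered at the origin (x,y,z triplets);
    // positions holds each cube's center.
    static FloatBuffer buildBatchedVertices(float[] cubeVertices, float[][] positions) {
        FloatBuffer batched = BufferUtils.createFloatBuffer(cubeVertices.length * positions.length);
        for (float[] p : positions) {
            for (int i = 0; i < cubeVertices.length; i += 3) {
                batched.put(cubeVertices[i] + p[0]);
                batched.put(cubeVertices[i + 1] + p[1]);
                batched.put(cubeVertices[i + 2] + p[2]);
            }
        }
        batched.flip();
        return batched;
    }

    // Upload once; afterwards the whole batch can be drawn with a single draw call.
    static void uploadOnce(int vboId, FloatBuffer batched) {
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, batched, GL_STATIC_DRAW);
    }
}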
I know that this question is already answered, but I think it's worth pointing out the Google IO presentation about WebGL optimization:
http://www.youtube.com/watch?v=rfQ8rKGTVlg
They cover essentially this exact same issue (lots of identical shapes with different colors/positions) and talk about some great ways to optimize such a scene (and theirs is dynamic too!).
I propose the following approach (a code sketch follows the list):
On load:
1. Generate a coordinates buffer (for one cube) and load it into a VBO (gl.glGenBuffers, gl.glBindBuffer)
On draw:
1. Bind the buffer (gl.glBindBuffer)
2. Draw each cube (loop)
2.1. Move the current position to the center of the current cube (gl.glTranslatef(position.x, position.y, position.z))
2.2. Draw the current cube (gl.glDrawArrays)
2.3. Move the position back (gl.glTranslatef(-position.x, -position.y, -position.z))
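A rough sketch of those steps in JOGL-style Java (gl is assumed to be a GL2 instance; cubeVertexBuffer, cubeVertexCount, cubePositions and Position are placeholder names, not from the question):

// On load: create the VBO once and fill it with one cube's vertex data.
int[] vbo = new int[1];
gl.glGenBuffers(1, vbo, 0);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo[0]);
gl.glBufferData(GL.GL_ARRAY_BUFFER, cubeVertexCount * 3 * 4, cubeVertexBuffer, GL.GL_STATIC_DRAW);

// On draw: bind the shared buffer once, then translate and draw for every cube.
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo[0]);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, 0);
for (Position p : cubePositions) {
    gl.glTranslatef(p.x, p.y, p.z);            // move to the cube's center
    gl.glDrawArrays(GL.GL_TRIANGLES, 0, cubeVertexCount);
    gl.glTranslatef(-p.x, -p.y, -p.z);         // move back for the next cube
}
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);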
