Three.js scene.remove vs. visible=false - three.js

If I remove() an Object3D from the scene, it won't be rendered but it will remain in memory. If I set that object's visible property to false it won't be rendered but it will remain in memory. What's the difference?
Context: I am experiencing performance issues when I have a lot of complex meshes in existence. Only one needs to be visible at any one time. The others are usually hidden with visible = false.

Well, the difference is that remove() takes the object out of the scene graph, i.e. it is no longer among the scene's children. When it is merely set to invisible, it stays in the scene data structure and can still be used in calculations, for example to rotate some other object towards it.
But yes, for rendering there is no difference in the end; both are ways to omit the object from drawing.
A practically useful difference is that if you need to hide and show objects a lot, setting the visible flag is quick and light, whereas manipulating the scene graph is a somewhat heavier operation. So to temporarily hide an object that you know you'll show again soon, it is a good idea to toggle the visibility flag; an object that you probably won't bring back is better removed from the scene. Keeping it in the scene also makes sense if you need it for calculations, like rotating something towards it (especially if it moves within some hierarchy itself).
To actually free memory, you need to remove the object from the scene and also dispose of the data it uses, as shown in e.g. "freeing memory in three.js".
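To make the "remove and dispose" step concrete, here is a minimal sketch. The `deepDispose` helper is my own name, not three.js API; it assumes three.js-style objects whose `geometry`, `material`, and `map` properties expose `dispose()`.

```javascript
// Walk a three.js-style object graph and dispose of GPU-backed resources.
function deepDispose(object3d) {
  // Recurse into children first (copy the list in case a caller mutates it).
  for (const child of [...object3d.children]) {
    deepDispose(child);
  }
  if (object3d.geometry) object3d.geometry.dispose();
  if (object3d.material) {
    // A mesh can carry a single material or an array of materials.
    const materials = Array.isArray(object3d.material)
      ? object3d.material
      : [object3d.material];
    for (const m of materials) {
      if (m.map) m.map.dispose(); // the diffuse texture, if any
      m.dispose();
    }
  }
}

// Temporary hide: cheap, object stays in the scene graph.
//   mesh.visible = false;
// Permanent removal: take it out of the graph AND free its resources.
//   scene.remove(mesh);
//   deepDispose(mesh);
```

Note that removing alone does not free GPU memory; the dispose() calls are what release the buffers and textures.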

Related

Order independent transparency and mixed opaque and translucent object hierarchies

We use three.js as the foundation of our WebGL engine and up until now, we only used traditional alpha blending with back-to-front sorting (which we customized a little to match the desired behavior) in our projects.
Goal: Now our goal is to incorporate the order-independent transparency algorithm proposed by McGuire and Bavoil in this paper, trying to rid ourselves of the usual problems with sorting and conventional alpha blending in complex scenes. I got it working without much hassle in a small, three.js-based prototype.
Problem: The problem we have in the WebGL engine is that we're dealing with object hierarchies consisting of both opaque and translucent objects, which are currently added to the same scene so that three.js will handle transform updates. This, however, is a problem: for the above algorithm to work, we need to render to one or more FBOs (more than one due to the lack of support for MRTs in three.js r79) to calculate accumulation and revealage, and finally blend the result with the front buffer to which the opaque objects have previously been rendered. In fact, this is what I do in my working prototype.
I am aware that three.js already does separate passes for both types of objects, but I'm not aware of any way to influence which render target three.js renders to (render(.., .., rt, ..) is not applicable) or how to modify the other pipeline state I need. If a mixed hierarchy is added to a single scene, I have no idea how to tell three.js where my fragments are supposed to end up. In addition, I need to reuse the depth buffer from the opaque pass during the transparent pass, with depth testing enabled but depth writes disabled.
Solution A: Now, the first obvious answer would be to simply set up two scenes and render opaque and translucent objects separately, choosing the render targets as we please, and finally do our compositing as needed.
This would be fine, except that we would have to perform, or at least trigger, all transformation calculations manually to achieve correct hierarchical behavior. So far this seems to be the most feasible option.
Solution B: We could render the scene twice, setting the visible flag of all opaque or transparent objects' materials to false depending on which pass we're currently doing.
This is a variant of Solution A, but with a single scene instead of two scenes. This would spare us the manual transformation calculations but we would have to alter materials for all objects per pass - not my favorite, but definitely worth thinking about.
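A minimal sketch of what Solution B could look like, assuming a three.js r79-style renderer where the third argument of render() selects the render target. The function name, the `userData._savedVisible` bookkeeping, and the stand-in interfaces are illustrative, not three.js API.

```javascript
// Render one scene in two passes, toggling `visible` per pass:
// opaque objects to the front buffer, transparent ones to an OIT FBO.
function renderTwoPass(renderer, scene, camera, oitTarget) {
  const setPass = wantTransparent => {
    scene.traverse(obj => {
      if (obj.material) {
        // Remember the original flag so hidden-by-design objects stay hidden.
        if (obj.userData._savedVisible === undefined) {
          obj.userData._savedVisible = obj.visible;
        }
        obj.visible = obj.userData._savedVisible &&
                      obj.material.transparent === wantTransparent;
      }
    });
  };

  setPass(false);
  renderer.render(scene, camera);            // opaque pass -> front buffer
  setPass(true);
  renderer.render(scene, camera, oitTarget); // transparent pass -> FBO

  // Restore the original visibility flags.
  scene.traverse(obj => {
    if (obj.material) {
      obj.visible = obj.userData._savedVisible;
      delete obj.userData._savedVisible;
    }
  });
}
```

The per-pass traversal is the cost the question mentions: every object's flag is touched twice per frame, which is cheap but not free for very large scenes.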
Solution C: Patch three.js to allow for more control of the rendering process.
The simplest approach here would be to tell the renderer which objects to render when render() is called, or to introduce something like renderOpaque() and renderTransparent().
Another way would be to somehow define the concept of a render pass and then render based on the information for that pass (e.g. which objects, which render target, how the pipeline is configured and so on).
Is someone aware of other, readily accessible approaches? Am I missing something or am I thinking way too complicated here?

Unity - best way to change the shape of the gameobject at runtime

I want to create a simple 2D game in Unity3D in which one of the entities has to grow and shrink. This is done by merging simple shapes.
A rough example in the picture below just to show what I mean:
It grows by adding components and shrinks by removing them. There will be a lot of entities on screen, so performance is very important.
Is it possible to dynamically change the shape of one gameobject? If not, which of the following solutions is more suitable, and why?
Constantly add new gameobjects (shapes) to the previous one and then remove them?
Create an animation. In that case, is it possible to change the animation speed at runtime, so that for example the entity first grows faster and then grows slower or shrinks? My concern is whether a speed change applies to the whole animation loop or whether it can be applied mid-loop (so that shrinking and growing run at different speeds)? I would prefer the latter.
If you have any other suggestions I'd be glad to hear them.
Create an empty game object and add all of these small pieces as its child. Then you can disable/enable whichever you want with gameObject.SetActive(false/true);
It depends on what "lots of objects" means and on the target platform.
You can easily have hundreds of sprites on screen in any case, especially if they get batched: http://docs.unity3d.com/Manual/DrawCallBatching.html
There is also a big performance benefit in using object pooling instead of instantiating new objects and destroying old ones.
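The pooling idea is engine-agnostic, so here is a minimal sketch in plain JavaScript; the class and method names are illustrative, not Unity API. In Unity you would pair acquire/release with gameObject.SetActive(true/false) on pooled GameObjects.

```javascript
// A minimal object pool: instead of creating and destroying objects,
// deactivate them on release and hand them back out on acquire.
class ObjectPool {
  constructor(factory) {
    this.factory = factory; // creates a fresh object when the pool is empty
    this.free = [];
  }
  acquire() {
    const obj = this.free.pop() || this.factory();
    obj.active = true; // analogous to gameObject.SetActive(true)
    return obj;
  }
  release(obj) {
    obj.active = false; // analogous to gameObject.SetActive(false)
    this.free.push(obj);
  }
}

// Usage: after warm-up, spawning and despawning allocates nothing new.
let created = 0;
const pool = new ObjectPool(() => ({ id: created++, active: false }));
const a = pool.acquire(); // creates object 0
pool.release(a);
const b = pool.acquire(); // reuses object 0, no new allocation
```

The win is avoiding per-spawn allocation and destruction cost (and, in garbage-collected engines, GC pressure), at the price of holding idle objects in memory.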
Having hundreds of animated objects would cause slowdown; the Mecanim Animator seems to be slower than the original Animation component.
Other options:
- Create a custom mesh that you modify at runtime (by adding/removing vertices), which also lets you freely modify the shape (by moving the vertices): http://docs.unity3d.com/ScriptReference/Mesh.html

Bullet Physics memory use on large changing worlds with many triangle meshes

Might the btDiscreteDynamicsWorld recreate/expand its own octree/spatial hash from the whole DynamicsWorld and keep expanding it as I increase the distance of added objects from (0,0,0)? Adding a new top level to the octree, for example, and therefore using more memory and never shrinking it back again?
I use a btDiscreteDynamicsWorld containing multiple btBvhTriangleMeshShape objects for static collision, plus some primitive entities (btBox and btCylinder shapes) that collide with the static triangle meshes.
I constantly move the "point of action" away from the origin (0,0,0) of the DynamicsWorld, removing old triangle meshes and primitive entities while adding new ones in the same manner (procedurally generated world).
When I keep moving along the x-axis the memory use increases constantly. But when I stop at some point and move back the way I came, deleting and adding new Shapes again, the memory use stays constant.
After hunting for possible memory leaks and making sure I add, remove, and delete the collision objects properly on my heap (including removing motion states manually, etc.), I still see this behavior.
Bullet's technical documentation is not very detailed, and skimming through the source code did not give me any clues either. The API does not expose anything like a recalcOctree() function.
Can I force a complete recalculation of the internal collision structure without deleting the whole btDynamicsWorld object? Or am I on the wrong track here entirely?
Has anyone else experienced increased memory usage in bullet when adding objects far away from (0,0,0) compared to objects near the origin?
I use Bullet 2.79

Are off-stage DisplayObjects in Flash still slowing down my game?

How does Flash deal with elements that are off-stage?
Obviously Flash doesn't actually render them (because they don't appear anywhere on-screen), but is the process of rendering them still existent, slowing down my game as much as it would if the elements were on-screen?
Or does Flash intelligently ignore elements that don't fall into a renderable area?
Should I manually remove objects from the DisplayList and add them back as they exit and enter the stage, or is this going to be irrelevant?
Yes, they are slowing down your game.
In one of my early experiments I developed a sidescroller with many NPCs scattered around the map, not all visible on the same screen. I still had to run their logic, but they weren't on screen. Performance was significantly better when I removed them from the display list while they were irrelevant (by simply checking their X position relative to the "camera"). Again, I'm not talking about additional code and events that may be attached to them, just plain graphical children of a movieclip.
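The culling described above can be sketched engine-agnostically; `cullByCameraX` and the `stage` interface here are illustrative stand-ins for the AS3 display list (addChild/removeChild/contains).

```javascript
// Remove objects from the display list when they are too far from the
// camera on the X axis, and re-add them when they come back into range.
function cullByCameraX(stage, objects, cameraX, viewHalfWidth) {
  for (const obj of objects) {
    const onScreen = Math.abs(obj.x - cameraX) <= viewHalfWidth;
    if (onScreen && !stage.contains(obj)) {
      stage.addChild(obj);    // entered the visible range
    } else if (!onScreen && stage.contains(obj)) {
      stage.removeChild(obj); // left the visible range
    }
  }
}
```

Running this once per frame (or every few frames) keeps the display list holding only what the renderer can actually show.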
The best practice though, in my experience, is drawing the objects into bitmaps. Of course, if you're too deep into your game already this may be irrelevant, but if you have the time to invest, this is one of the best ways to get the most out of AS3 for 2D games. I found some of the greatest tutorials on bitmaps and AS3 at 8bitrocket:
http://www.8bitrocket.com/books/the-essential-guide-to-flash-games/ I can elaborate on the subject if you want, but I think I'm going off topic here.
Even if some display objects are outside the stage area, they are still executed. If any animation is playing in them, that might slow down performance.
The question arises: why keep unused items outside the stage area at all? If you need to "cache" movieclips for faster loading, load them in a keyframe that control will never reach. For example, load the display objects you want to show in frame 1, put a stop() in that frame's actions panel, make it a keyframe, and load the unused animations in frame 2. Since there is a stop() in frame 1, control never reaches frame 2, but the display objects are cached.
Or, if the unused display objects contain code and thus need to load along with the main game components, try putting stop() in their frames so that they don't animate.

Advice for a Cocoa drawing application

I'm new to Cocoa and looking for a little advice for an application from experienced Cocoa-ers. 
I'm building a basic OmniGraffle-style app where objects are drawn/dragged onto a canvas. After the objects are on the canvas, they can be selected to modify their properties (fill color, stroke color/width, etc.), be resized, moved around to a new position, etc.
To get warmed up, I've written a basic drawing app that creates objects (circles, rectangles, etc.) as drawn by the mouse on a custom NSView, adds the objects to an NSArray collection, and renders the contents of the collection into the view. I could continue in this vein, but I'm going to have to add support for detecting object selection, resolving z-indexing, focus highlighting, drag handles, etc. with all the associated rendering. Also, rendering every object on each cycle seems terribly wasteful.
It seems like a better approach would be to drop lightweight view objects onto a canvas that were able to detect mouse events on themselves, draw themselves and their focus rings, and so forth. However, while NSView seems like an object with these properties, I see a lot of chatter on the web about it being a heavyweight component with a lot of baggage. I've stumbled across NSCells and have read up on them, but I'm not sure if they are the right alternative.
Any suggestions? If you can nudge me in the right direction I'd greatly appreciate it.
First rule of optimization: Don't do it first.
A custom NSView per shape sounds about right to me. Whether you'll want different subclasses for different shapes will be up to you; I'd start out with a single generic shape-view class and shapes able to describe themselves as Bézier paths, but don't be too strict about holding to that—change it if it'd make it easier. Just implement it however it makes sense to you.
Then, once you've got it working, profile it. Make as many shapes as you can. Then make more. High-poly-count shapes. Intersections. Fills, strokes, shadows, and gradients. You probably should create a separate document for each stressor. Notice just at the user level what's slow. Then, run your app under Instruments and look into why it's slow.
Maybe views will turn out to be the wrong solution. Don't forget to look into CALayers. But don't rule anything out as slow until you've tried it and measured it.
