How to optimize a group with lots of meshes? - three.js

I load an OBJ file with ObjLoader2, which converts the OBJ into a group with a large number of meshes. Because my OBJ is really huge, performance is really bad. Is there a way to increase performance while keeping the information about all the parts of the OBJ? I need to retain the ability to select every mesh individually.

Related

Is it okay to use a single MeshFaceMaterial across several objects in Three.JS?

I am parsing and loading a 3d object file (similar to ColladaLoader in operation). This contains several objects, and many of the individual objects have several materials across different faces. So, I use MeshFaceMaterial. No problems so far.
However, a handful of the objects re-use materials across them. Would it be appropriate to create a single MeshFaceMaterial and use it across all objects? There are about 120 objects per file. My concern is that if I go down this route it may impair performance (e.g. excessive draw calls, or maybe allocation of memory for each material per object?), as the majority of the objects use their own unique materials. Or is this an undue concern, and is the renderer suitably mature for this not to be a problem? The online documentation only mentions shared geometry, not entire shared Three.Mesh objects.
Using renderer.info as suggested, I was able to take a look at the draw calls, and I'm happy to report they were the same regardless of whether you used a single shared MeshFaceMaterial or individual ones per object. That's not to say there aren't other possible performance penalties, and it does seem logical to keep clearly separate objects separate, but for this use case, where there is some crossover, it was not a problem.
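For reference, here is a minimal sketch of checking draw calls this way; the scene and camera setup are illustrative placeholders, not from the original post:

// Minimal sketch: reading draw-call stats from three.js renderer.info.
// The scene/camera here are placeholders, not from the original post.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, 1, 0.1, 100);

renderer.render(scene, camera);

// renderer.info.render holds per-frame stats (reset each frame by default).
console.log(renderer.info.render.calls);      // draw calls this frame
console.log(renderer.info.memory.geometries); // geometries held in memory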

Detecting Collisions Between Dynamic Objects in a 2D Environment

Let's say we have a lot of dynamic objects in a two-dimensional world, e.g. characters, projectiles, power-ups, the usual stuff you'd find in a game. All of them are moving. We want to detect collisions between them. What is a good way to do it?
I have looked at quad-trees, but it seems that to detect collisions between dynamically moving objects, I'd have to re-create the quad-tree every frame (because objects change their positions every frame). This looks like a costly operation.
Are there any other approaches to this problem besides quad-trees? Is there a way to improve the quad-tree approach? Maybe re-creating the tree on every frame is not so costly after all?
Normally, you update the quadtree (rather than throwing away the old one and building a new one), and this is not as expensive as you think: normally objects move only a small distance in each frame, so that the majority remain in the same node of the quadtree and there are few changes. Even in the worst case, where every item moves across a major boundary and has to be removed and reinserted, the cost is only O(n log n). But more or less any loop over all your items will cost about this much, so one more loop isn't such a big deal.
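A minimal sketch of that update strategy, assuming a hypothetical Quadtree interface with insert/remove (none of these names come from a specific library):

// Per-frame quadtree maintenance: reinsert only objects whose node changed.
// Quadtree, QuadNode and Entity are hypothetical stand-ins.
interface QuadNode { contains(x: number, y: number): boolean; }
interface Quadtree {
  insert(e: Entity): QuadNode;            // returns the leaf the entity landed in
  remove(e: Entity, from: QuadNode): void;
}
interface Entity { x: number; y: number; node: QuadNode | null; }

function updateTree(tree: Quadtree, entities: Entity[]): void {
  for (const e of entities) {
    // Most objects move a small distance per frame, so they usually stay
    // in the same node and we skip any tree work for them.
    if (e.node && e.node.contains(e.x, e.y)) continue;
    // Crossed a node boundary: remove and reinsert, O(log n) each,
    // O(n log n) even in the worst case where every object changes nodes.
    if (e.node) tree.remove(e, e.node);
    e.node = tree.insert(e);
  }
}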

Proper Data Structure Choice for Collision System

I am looking to implement a 2D top-down collision system, and was hoping for some input on the likely performance of a few different ideas. For reference, I expect the number of moving collision objects to be in the dozens, and the static collision objects to be in the hundreds.
The first idea is border-line brute force (or maybe not so border-line). I would store two lists of collision objects in a collision system. One list would be dynamic objects, the other would include both dynamic and static objects (each dynamic object would be in both lists). Each frame I would loop through the dynamic list and pass each object the larger list, so it could find anything it may run into. This will involve a lot of unnecessary calculations for any reasonably sized loaded area, but I am using it as a sort of baseline because it would be very easy to implement.
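A minimal sketch of that brute-force baseline (the types and names are illustrative):

// Brute-force baseline: every dynamic object tests against everything.
// Collider and its intersects() method are illustrative stand-ins.
interface Collider { id: number; intersects(other: Collider): boolean; }

function bruteForcePass(dynamic: Collider[], all: Collider[]): void {
  for (const d of dynamic) {
    for (const other of all) {
      if (other.id === d.id) continue;  // dynamic objects appear in both lists
      if (d.intersects(other)) {
        // handle the collision between d and other
      }
    }
  }
}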
The second idea is to have a single list of all collision objects, and a 2D array of either ints or floats representing the loaded area. Each element in the array would represent a physical location, and each object would have a size value. Each time an object moved, it would subtract its size value from its old location and add it to its new location. The objects would have to access elements in the array before they moved to make sure there was room in their new location, but that would be fairly simple to do. Besides the fact that I have a very public, very large array, I think it would perform fairly well. I could also implement with a boolean array, simply storing if a location is full or not, but I don't see any advantage to this over the numeric storage.
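A compact sketch of that occupancy-grid idea, with the simplifying assumption that an object occupies one cell (all names and sizes here are illustrative):

// Occupancy grid: each cell accumulates the "size" of whatever is in it.
// One-cell-per-object is a simplification for the sketch.
const W = 128, H = 128;
const occupancy = new Float32Array(W * H);

function cell(x: number, y: number): number { return y * W + x; }

// Check there is room before moving, as described above.
function canMove(x: number, y: number, size: number, capacity = 1): boolean {
  return occupancy[cell(x, y)] + size <= capacity;
}

function move(obj: { x: number; y: number; size: number }, nx: number, ny: number): void {
  occupancy[cell(obj.x, obj.y)] -= obj.size; // subtract size from old location
  occupancy[cell(nx, ny)] += obj.size;       // add it to the new location
  obj.x = nx; obj.y = ny;
}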
The third idea I had was less well formed. A month or two ago I read about a two-dimensional, rectangle-based data structure (it may have been a tree; I don't remember) that would be able to keep elements sorted by position. Then I would only have to pass the dynamic objects their small neighborhood of objects for update. I was wondering if anyone had any idea what this data structure might be, so I could look more into it, and if so, how the per-frame sorting of it would affect performance relative to the other methods.
Really I am just looking for ideas on how these would perform, and any pitfalls I am likely overlooking in any of these. I am not so much worried about the actual detection, as the most efficient way to make the objects talk to one another.
You're not talking about a lot of objects in this case. Honestly, you could probably brute force it and be fine for your application, even in mobile game development. With that in mind, I'd recommend you keep it simple but throw a bit of optimization on top for gravy. Spatial hashing with a reasonable cell size is the way I'd go here -- relatively reasonable memory use, a decent speedup, and not that bad as far as implementation complexity goes. More on that in a moment!
You haven't said what the representation of your objects is, but in any case you're likely going to end up with a typical "broad phase" and "narrow phase" (like a physics engine) -- the "broad phase" consisting of a false-positives "what could be intersecting?" query and the "narrow phase" brute forcing out the resulting potential intersections. Unless you're using things like binary space partitioning trees for polygonal shapes, you're not going to end up with a one-phase solution.
As mentioned above, for the broad phase I'd use spatial hashing. Basically, you establish a grid and mark down what's touching each grid cell. (It doesn't have to be perfect -- it could just be which axis-aligned bounding boxes overlap each cell, even.) Then, later, you go through the relevant cells of the grid and check whether everything in each relevant cell is actually intersecting with anything else in that cell.
The trick is, instead of having an array, have a hash table keyed by grid cell. That way you're only taking up space for cells that actually have something in them. (This is not a substitute for a badly sized grid -- you want your grid to be coarse enough that an object isn't in a ridiculous number of cells, because that takes memory, but you want it to be fine enough that not all objects end up in a few cells, because that doesn't save much time.) Chances are that by visual inspection you'll be able to figure out what a good grid size is.
One additional step to spatial hashing... if you want to save memory, throw away the indices that you'd normally verify in a hash table. False positives only cost CPU time, and if you're hashing correctly, it's not going to turn out to be much, but it can save you a lot of memory.
So:
When you update objects, update which grids they're probably in. (Again, it's good enough to just use a bounding box -- e.g. a square or rectangle around the object.) Add the object to the hash table for each cell it's in. (E.g. If you're in cell 5,4, that hashes to the 17th entry of the hash table. Add it to that entry of the hash table and throw away the 5,4 data.) Then, to test collisions, go through the relevant cells in the hash table (e.g. the entire screen's worth of cells if that's what you're interested in) and see what objects inside of each cell collide with other objects inside of each cell.
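A compact sketch of the scheme just described (all names and constants are illustrative); per the memory note above, it never stores the cell coordinates, so a hash collision only costs a rejected overlap test:

// Spatial-hash broad phase as described above; names and constants are
// illustrative. Cell coordinates hash into a fixed-size bucket table and,
// per the memory note above, the (cx, cy) key is thrown away: distant
// cells may share a bucket, which only costs a rejected overlap test.
interface Box { minX: number; minY: number; maxX: number; maxY: number; }

const CELL = 64;          // grid size: tune by visual inspection, as above
const TABLE_SIZE = 1024;  // number of hash buckets
const table: Box[][] = Array.from({ length: TABLE_SIZE }, () => []);

function bucket(cx: number, cy: number): Box[] {
  const h = ((cx * 73856093) ^ (cy * 19349663)) >>> 0; // simple 2D mix
  return table[h % TABLE_SIZE];
}

// Add a box to every cell its bounds touch (e.g. cell 5,4 hashes to some
// entry of the table; the 5,4 itself is not stored).
function insert(b: Box): void {
  for (let cy = Math.floor(b.minY / CELL); cy <= Math.floor(b.maxY / CELL); cy++)
    for (let cx = Math.floor(b.minX / CELL); cx <= Math.floor(b.maxX / CELL); cx++)
      bucket(cx, cy).push(b);
}

function overlaps(a: Box, b: Box): boolean {
  return a.minX <= b.maxX && b.minX <= a.maxX &&
         a.minY <= b.maxY && b.minY <= a.maxY;
}

// Broad phase: only boxes sharing a bucket become narrow-phase candidates.
// A pair sharing several cells can be reported more than once; dedupe or
// tolerate that in the narrow phase.
function candidatePairs(): Array<[Box, Box]> {
  const pairs: Array<[Box, Box]> = [];
  for (const cell of table)
    for (let i = 0; i < cell.length; i++)
      for (let j = i + 1; j < cell.length; j++)
        if (overlaps(cell[i], cell[j])) pairs.push([cell[i], cell[j]]);
  return pairs;
}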
Compared to the solutions above:
Versus brute forcing: it takes less time.
This has some commonality with the "2D array" method mentioned above because, after all, we're imposing a "grid" (or 2D array) over the represented space; however, we're doing it in a way less prone to accuracy errors (since it's only used for a broad phase, which is conservative). Additionally, the memory requirements are lessened by the zealous data reduction in the hash tables.
kd-, sphere-, X-, BSP-, R-, and other "TLA" trees are almost always quite nontrivial to implement correctly and test, and even after all that effort they can end up being much slower than you'd expect. You normally don't need that sort of complexity for a few hundred objects.
Implementation note:
Each node in the spatial hash table will ultimately be a linked list. I recommend writing your own linked list with careful allocations. Each node needn't take up more than 8 bytes (if you're using C/C++) and should use a pooled allocation scheme so you're almost never allocating or freeing memory. Relying on the built-in allocator will likely cripple performance.
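As a rough illustration of that pooling idea in TypeScript terms (the original note is about C/C++; the typed-array layout below is my stand-in, with two 4-byte ints per node matching the 8-byte figure):

// Pooled linked-list nodes backed by typed arrays: two 4-byte ints per
// node (value + next index), so 8 bytes each, and no per-node allocation.
const CAPACITY = 1 << 16;
const nodeValue = new Int32Array(CAPACITY); // payload, e.g. an object id
const nodeNext  = new Int32Array(CAPACITY); // index of the next node, -1 = end
let freeHead = 0;

// Chain every node into the free list once, up front.
for (let i = 0; i < CAPACITY; i++) nodeNext[i] = i + 1 < CAPACITY ? i + 1 : -1;

function allocNode(value: number, next: number): number {
  const i = freeHead;                 // pop from the free list: no malloc, no GC
  if (i === -1) throw new Error('node pool exhausted');
  freeHead = nodeNext[i];
  nodeValue[i] = value;
  nodeNext[i] = next;
  return i;
}

function freeNode(i: number): void {
  nodeNext[i] = freeHead;             // push back onto the free list
  freeHead = i;
}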
First thing, I am but a noob. I am working my way through the 3dbuzz XNA Extreme 101 videos, and we are just now covering a system that uses static lists of each different type of object; when updating an object, you only check it against the list(s) of things it is supposed to collide with.
So you only check enemy collisions against the player or the player's bullets, not against other enemies, etc.
So there is a static list of each type of game object, and then each game node has its own collision list (edit: a list of nodes) containing only the types it can hit, as in the sketch below.
Sorry if it's not clear what I mean; I'm still finding my feet.
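A minimal sketch of that per-type list idea (the types and the circle test are illustrative):

// Per-type collision lists: an entity only tests the lists of the
// types it can actually hit. All names here are illustrative.
type Kind = 'player' | 'enemy' | 'playerBullet';
interface Body { x: number; y: number; r: number; }

const lists: Record<Kind, Body[]> = { player: [], enemy: [], playerBullet: [] };

// An enemy only cares about the player and the player's bullets.
const enemyHits: Kind[] = ['player', 'playerBullet'];

function circleHit(a: Body, b: Body): boolean {
  const dx = a.x - b.x, dy = a.y - b.y;
  return dx * dx + dy * dy <= (a.r + b.r) * (a.r + b.r);
}

function checkEnemy(enemy: Body): void {
  for (const kind of enemyHits)
    for (const other of lists[kind])
      if (circleHit(enemy, other)) { /* handle the hit */ }
}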

Performance problem with rendering 1000 cubes in XNA 4.0 [duplicate]

I'm aware that the following is a vague question, but I'm hitting performance problems that I did not anticipate in XNA.
I have a low-poly model (it has 18 faces and 14 vertices) that I'm trying to draw to the screen a (high!) number of times. I get over 60 FPS (on a decent machine) until I draw this model 5000+ times. Am I asking too much here? I'd very much like to at least double or triple that number (10-15k).
My code for actually drawing the models is given below. I have tried to eliminate as much computation from the draw cycle as possible; is there more I can squeeze from it, or are there better alternatives altogether?
Note: tile.Offset is computed once during initialisation, not every cycle.
foreach (var tile in Tiles)
{
    var myModel = tile.Model;

    // Absolute bone transforms for this model; for a static tile model,
    // this allocation and copy could be hoisted out of the draw loop.
    Matrix[] transforms = new Matrix[myModel.Bones.Count];
    myModel.CopyAbsoluteBoneTransformsTo(transforms);

    foreach (ModelMesh mesh in myModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            // effect.EnableDefaultLighting();
            effect.World = transforms[mesh.ParentBone.Index]
                * Matrix.CreateTranslation(tile.Offset);
            effect.View = CameraManager.ViewMatrix;
            effect.Projection = CameraManager.ProjectionMatrix;
        }
        mesh.Draw(); // one batch (draw call) submitted per mesh, per tile
    }
}
You're quite clearly hitting the batch limit. See this presentation and this answer and this answer for details. Put simply: there is a limit to how many draw calls you can submit to the GPU each second.
The batch limit is a CPU-based limit, so you'll probably see that your CPU gets pegged once you get to your 5000+ models. Worse still, when your game is doing other calculations, it will reduce the CPU time available to submit those batches.
(And it's important to note that, conversely, you are almost certainly not hitting GPU limits. No need to worry about mesh complexity yet.)
There are a number of ways to reduce your batch count. Frustum culling is one. Probably the best one to pursue in your case is Geometry Instancing, which lets you draw multiple models in a single batch. Here is an XNA sample that does this.
Better still, if it's static geometry, can you simply bake it all into one or a few big meshes?
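The linked XNA sample is the XNA-specific reference; purely as an illustration of the same instancing idea on the stack from the first question above, here is a minimal three.js sketch (all values are made up):

// Instancing illustration in three.js: one InstancedMesh submits all
// copies in a single draw call instead of one call per model.
import * as THREE from 'three';

const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshBasicMaterial();
const count = 15000; // the 10-15k target from the question

const mesh = new THREE.InstancedMesh(geometry, material, count);
const m = new THREE.Matrix4();
for (let i = 0; i < count; i++) {
  // Position each instance; a real tile grid would use tile offsets here.
  m.setPosition(Math.random() * 100, 0, Math.random() * 100);
  mesh.setMatrixAt(i, m);
}
mesh.instanceMatrix.needsUpdate = true; // upload the per-instance matrices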
As with any performance problem, there are limits within which a particular approach works. You need to measure and see where the problems are. The best option is to use a profiler, but even basic measurements like looking at CPU load may show what bottlenecks you have.
As a first investigation step, I'd recommend removing all computations (like the matrix multiplications) and seeing whether you get improvements; if you do, the CPU is still doing more work than the GPU.
Make sure you are not taking measurements on a debug build; it can make the application significantly slower if it is CPU bound.
Side note: the GPU works best when you send it large operations relatively infrequently. Your code does more or less the opposite: it sends a huge number of very small drawing requests. You should be able to batch your primitives and get better performance. There are samples around showing how to render large numbers of simple objects (including ones in the DirectX SDK); searching for "gpu rendering crowds" can give you a starting point.

ActionScript 3 Optimization - improving performance with large #'s of objects

I'm developing some library classes for flocking/steering behaviours on large numbers of objects (2000+). I'm finding that at < 500 instances, performance is reasonable. As the numbers increase, the framerate bogs down.
I've seen remarkable performance with libraries such as Flint or Box2D with ridiculous #'s of particles / objects, so it should be possible to optimize / refactor my code to be a bit better.
I'm aware of the basic optimizations, such as bitwise operations and optimized for loops. Are there any more fundamental approaches I should be considering? For example, currently each instance is a vector-based MovieClip. Would working with BitmapData be more efficient?
Forget about vectors.
Cache them as BitmapData and draw to a Bitmap, or draw a bitmap-filled rect to graphics.
Don't use vectors; find a way around them. Be clever: bitmap lookup tables, caching, more lookup tables.
Spend RAM on caching things for different orientations, views, frames, etc., rather than spending PROCESSOR on wasteful CPU cycles.
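As a rough sketch of that trade-off in canvas/TypeScript terms (the AS3 equivalent would use BitmapData; drawShape and all the sizes here are illustrative):

// Pre-render one bitmap per orientation once; at runtime, only blit.
// This trades RAM (the cache) for CPU (no per-frame vector rasterizing).
function drawShape(ctx: CanvasRenderingContext2D): void {
  // Stand-in for expensive vector art.
  ctx.beginPath();
  ctx.moveTo(0, -20); ctx.lineTo(15, 20); ctx.lineTo(-15, 20);
  ctx.closePath();
  ctx.fill();
}

const FRAMES = 36; // one cached frame per 10 degrees of rotation
const cache: HTMLCanvasElement[] = [];

for (let i = 0; i < FRAMES; i++) {
  const c = document.createElement('canvas');
  c.width = c.height = 64;
  const ctx = c.getContext('2d')!;
  ctx.translate(32, 32);
  ctx.rotate((i / FRAMES) * Math.PI * 2);
  drawShape(ctx); // pay the vector cost once, at startup
  cache.push(c);
}

// Per frame: an index lookup plus a bitmap blit.
function blit(ctx: CanvasRenderingContext2D, x: number, y: number, angle: number): void {
  const i = Math.round((angle / (Math.PI * 2)) * FRAMES) % FRAMES;
  ctx.drawImage(cache[(i + FRAMES) % FRAMES], x - 32, y - 32);
}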
