Unity3D - large FBX file with animations

I have a project that uses a large truck model (around 500 MB): a very detailed CAD model converted to FBX format in 3ds Max.
The FBX file contains animations as well.
Question 1:
Should I keep all the animations as clips in the same FBX file, or in separate animation files?
Having a separate FBX file for each animation will increase the overall app size.
Question 2:
How do I optimize the mesh, which is around 500 MB with plenty of child objects (it is a very detailed mesh), for performance? Will culling reduce draw calls, or will combining meshes reduce draw calls? Is there a way to reduce the tri/poly count of the mesh for optimization?

If you really care about app size, go for a single file. Otherwise it's generally much more convenient to have separate .fbx files for animations.
You can try Limited Dissolve and the Decimate modifier in Blender to reduce the vertex/poly count.

Related

GLTF anim and morph playback issue when using three.js with more than one mixer

I have a single gltf file, exported from Blender with 6 anims and 20 morph targets. When that's the only skinned gltf object in the scene, everything plays nicely - I can switch between bone anims (run, walk, idle, etc), and get all morph anims (for facial expressions) cycling on a timer, or triggered by events. Yay.
The problem is when I introduce a second skinned object, such as an NPC. At that point lots of weirdness starts to happen.
For example, when morph targets cycle expressions on/off the player object, the NPC model standing nearby scales down and disappears on the off cycle, then scales back up during the on cycle. Another example, at init time the NPC object might randomly turn into an instance of another loaded object (a tree or a building), or occasionally a mini version of some random object, at 10% normal scale, and then start rapidly bouncing around in unpredictable and inconsistent ways. I have no idea what's going on.
I thought it might have something to do with loading multiple mixers, but then that's what the docs state should be done - "When multiple objects in the scene are animated independently, one AnimationMixer may be used for each object." Unless I'm reading that wrong?
I'm using:
npcMixer = new THREE.AnimationMixer(npc);
virtually the same as what I do for the player:
playerMixer = new THREE.AnimationMixer(player);
Is this a bad/mistaken approach?
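Both mixers are then advanced every frame; a minimal sketch of that update loop (the clock, renderer, scene, and camera names are assumed, not from the question):
const clock = new THREE.Clock();
function animate() {
  requestAnimationFrame(animate);
  const delta = clock.getDelta();   // one shared clock, one delta per frame
  playerMixer.update(delta);
  npcMixer.update(delta);
  renderer.render(scene, camera);
}
animate();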
Perhaps worth noting: I had FBX versions of the player and NPC working just fine together when exported and accessed as individual files. Then I spent a lot of time converting to GLTF, since it's faster and lets me wrap all the actions up in a single file, which the FBX exporter does not seem to support. (If I'm wrong about FBX being able to export multiple actions in a single file for playback in the three.js context, please let me know!)
Three.js r98
Blender 2.79
Thanks for any advice.

Dividing a sphere into multiple textures

I have a sphere with a texture of Earth that I generate on the fly with the canvas element from an SVG file and then manipulate.
The texture size is 16384x8192; anything less than this looks blurry on close zoom.
But this is a huge texture size and it is causing memory problems... (it does look very good when it works, though).
I think a better approach would be to split the sphere into 32 separate textures, each 2048x2048 in size.
A few questions:
How can I split the sphere and assign the right textures?
Is this approach better than a single huge texture in terms of memory and performance?
Is there a better solution?
Thanks
You could subdivide a cube and cube-map it.
Instead of having one texture per face, you would have NxN textures per face. 32 doesn't divide evenly across six faces, but 24, for example, does (6 faces x 2x2 tiles).
You will still use the same amount of memory. If the shape actually needs to be spherical, you can further subdivide the segments and normalize the entire shape (spherify it).
You probably can't even use such a big texture anyway; WebGL implementations commonly cap texture dimensions at 8192 or 16384.
(The original answer included an image here: note the top sphere, which is cube-mapped; ignore the isocube.)
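A minimal sketch of that spherify step in a recent three.js (the function name and segment count are illustrative, not from the answer):
import * as THREE from 'three';
// Subdivide a unit cube, then push every vertex out onto the sphere.
function spherifiedCube(radius: number, segments: number): THREE.BufferGeometry {
  const geometry = new THREE.BoxGeometry(1, 1, 1, segments, segments, segments);
  const pos = geometry.attributes.position;
  const v = new THREE.Vector3();
  for (let i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i).normalize().multiplyScalar(radius);
    pos.setXYZ(i, v.x, v.y, v.z);
  }
  geometry.computeVertexNormals();
  return geometry;
}
Each NxN block of quads on each cube face can then get its own material and 2048x2048 texture.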
Typically, that's not something you'd do programmatically, but in a 3D program like Blender or 3ds Max. It involves some fairly trivial mesh separation, UV mapping, and material assignment. One other approach worth experimenting with would be to have multiple materials but only one mesh - you'd still get (somewhat) progressive loading. BUT
Are you sure you'd be better off with "chunks" loading sequentially rather than one big texture taking a long time to load? Sure, it'll improve things a bit in terms of timeouts and caching, but the tradeoff is having big chunks of your mesh be textureless, which is noticeable and unaesthetic.
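To illustrate the "multiple materials, one mesh" idea from the paragraph above, a minimal three.js sketch (the group split and texture names are illustrative):
import * as THREE from 'three';
const loader = new THREE.TextureLoader();
const geometry = new THREE.SphereGeometry(1, 64, 32);
// Split the index buffer into two draw groups, one per material.
const count = geometry.index!.count;
const half = Math.floor(count / 6) * 3;   // keep the split on a triangle boundary
geometry.clearGroups();
geometry.addGroup(0, half, 0);
geometry.addGroup(half, count - half, 1);
const mesh = new THREE.Mesh(geometry, [
  new THREE.MeshBasicMaterial({ map: loader.load('earth_west.png') }),
  new THREE.MeshBasicMaterial({ map: loader.load('earth_east.png') }),
]);
Each group renders as soon as its texture arrives, which is where the (somewhat) progressive loading comes from.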
There are a few approaches that would mitigate your problem. First, it's important to understand that texture loading optimization techniques - while common in game engines - aren't really part of three.js or what it's built for. You'll never get the near-seamless LODs or GPU optimization techniques that you get with UE4 or Unity. Furthermore, WebGL - while having made many strides over the past decade - is not ideal for handling vast texture sizes, neither at the GPU level (since it's based on OpenGL ES, suited primarily for mobile devices) nor at the caching level - we're still dealing with browsers here. You won't find a lot of WebGL work done with textures of the dimensions you refer to.
Having said that,
A. A loader will let you do other things while your textures are loading, so your user isn't staring at an 'unfinished' mesh. It lets you be pretty clever with dynamic loading and UX design. Additionally, take a look at this gist to get an idea of what a progressive texture loader could look like. A much more involved, JPEG-specific technique can be found here, but I wouldn't approach it unless you're comfortable with low-level graphics programming.
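The simplest form of that idea is "small placeholder first, full texture later"; a rough sketch (the file names are assumptions):
import * as THREE from 'three';
const loader = new THREE.TextureLoader();
// Start with a small texture that loads quickly...
const material = new THREE.MeshBasicMaterial({ map: loader.load('earth_1k.jpg') });
// ...and swap in the full-resolution one once it arrives.
loader.load('earth_16k.jpg', (highRes) => {
  material.map = highRes;
  material.needsUpdate = true;
});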
B. three.js does have a basic implementation of LOD, although I haven't tinkered with it myself and am not sure it's useful for textures; that said, the basic premise to look into is whether you can load progressively higher-resolution files on a per-need basis, the way Google Earth does, for example.
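For reference, three.js's LOD object is used roughly like this (the meshes and distances are illustrative, and scene and camera are assumed):
import * as THREE from 'three';
const material = new THREE.MeshBasicMaterial();
const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 64, 32), material), 0);  // near: high detail
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 16, 8), material), 50);  // far: low detail
scene.add(lod);
// In the render loop:
lod.update(camera);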
C. This is out of the scope of your question - but I'd look into what happens under the hood in Unity's WebGL export, and what kind of clever tricks are employed there for similar purposes.
Finally, does your project have to be in WebGL? For something ambitious and demanding, sometimes "proper" OpenGL / DX makes much more sense.

Animation and instancing performance

Talking about the storage and loading of models and animations, which would be better for a game engine:
1 - Have a mesh and a skeleton for each model, both in the same file, each skeleton with 10~15 animations (so each model has its own animations).
2 - Have a lot of meshes and a low number of skeletons, in separate files, so the same skeleton (and its animations) can be reused for more than one mesh; each bone set can have a lot of animations. (Notice that in this case, reusing the same bone set and animations causes a loss of uniqueness.)
And now, if I need to show 120~150 models in each frame (animated and skinned on the GPU), 40 of them of the same type, is it better to:
1 - Use an instancing system for all models in the game, even if I only need one model of each type.
2 - Detect which models need instancing (when they appear more than once) and use a different render path (other shader programs) for them, and non-instanced rendering for the others.
3 - Not use instancing at all, because the "gain" would be very low for this number of models.
All the "models" talked here are animated models, currently I use the MD5 file with GPU skinning but without instancing, and I would know if there are better ways to do all the process of animating.
If someone know a good tutorial or can put me on the way... I dont know how I could create a interpolated skeleton and use instancing for it, let me explain..:
I can compress all the bone transformations (matrices) for all animation for all frames in a simple texture and send it to the vertex shader, then read for each vertex for each model the repective animation/frame transformation. This is ok, I can use instancing here because I will always send the same data for the same model type, but, when I need to use a interpolate skeleton, should I do this interpolation on vertex shader too? (more loads from the texture could cause some lost of performance).
I would need calculate the interpolated skeleton on the CPU too anyway, because I need it for colision...
Any solutions/ideas?
I'm using DirectX, but I think this applies to other systems too.
=> Now I just need an answer to the first question; the second is solved (but if anyone wants to give other suggestions, that's fine).
The best example I can think of, and one I have personally used, is one by NVIDIA called Skinned Instancing. The example describes a way to render many instances of the same skinned mesh. There is code and a whitepaper available too :)
Skinned Instancing by NVidia
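The core trick in that sample is baking the bone matrices of every animation frame into a float texture that the vertex shader samples per instance. A rough CPU-side sketch of the packing, using three.js's DataTexture for brevity (in Direct3D 9 the equivalent would be a texture created with D3DFMT_A32B32G32R32F):
import * as THREE from 'three';
// boneMatrices[frame][bone] holds 16 floats (a 4x4 matrix); the layout below
// stores one matrix per 4 RGBA texels, with one animation frame per texture row.
function packBoneTexture(boneMatrices: Float32Array[][]): THREE.DataTexture {
  const frames = boneMatrices.length;
  const bones = boneMatrices[0].length;
  const data = new Float32Array(frames * bones * 16);
  for (let f = 0; f < frames; f++) {
    for (let b = 0; b < bones; b++) {
      data.set(boneMatrices[f][b], (f * bones + b) * 16);
    }
  }
  const texture = new THREE.DataTexture(data, bones * 4, frames, THREE.RGBAFormat, THREE.FloatType);
  texture.needsUpdate = true;
  return texture;
}
// In the vertex shader, each instance's frame and each vertex's bone indices
// pick the right texels, and the 4 weighted bone matrices are blended there.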

Overall strategy for storing animated meshes

I've been trying to figure out how you'd take a mesh generated in a program like 3ds Max and bring it into your game with animations, textures, etc.
I've looked at FBX and Collada, but from what I've read, they're used as an intermediate step between the modelling software and some final format that may be custom to the game. What I'm looking for is a book or tutorial that covers, in a general way, what you would store in your custom file, how you would store animation data, and so on.
Right now I don't really have a general plan of attack, and all of the guides I've seen stick to rendering a few triangles.
It doesn't have to be specific to OpenGL, although that is what I'll be using.
Yes, Collada is an interchange format.
What that means is that it is very generic. And if I'm right, that is exactly what you are looking for!
You can use a library such as Assimp to load Collada into a generic scene graph, and then have your game/renderer use it directly, or preprocess it and then consume it.
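For a concrete sense of what a preprocessed runtime format might store - this is purely illustrative, not any standard - it often boils down to something like:
// A possible shape for a baked runtime model (all names are illustrative):
interface Joint {
  name: string;
  parent: number;              // index into the joint array, -1 for the root
  inverseBindMatrix: Float32Array;  // 16 floats
}
interface AnimationTrack {
  joint: number;               // which joint this track animates
  times: Float32Array;         // keyframe timestamps, in seconds
  rotations: Float32Array;     // quaternions, 4 floats per key
  translations: Float32Array;  // 3 floats per key
}
interface RuntimeModel {
  vertices: Float32Array;      // interleaved position / normal / uv / weights
  indices: Uint32Array;
  joints: Joint[];
  clips: { name: string; duration: number; tracks: AnimationTrack[] }[];
}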

DirectX9 - Efficiently Drawing Sprites

I'm trying to create a platformer game, and I am taking various sprite blocks and piecing them together to draw the level. This requires drawing a large number of sprites on the screen every frame. A good computer has no problem drawing all the sprites, but it starts to impact performance on older computers. Since this is NOT a big game, I want it to be able to run on almost any computer. Right now, I am using the following DirectX function to draw my sprites:
D3DXVECTOR3 center(0.0f, 0.0f, 0.0f);  // rotation/scaling center of the sprite
D3DXVECTOR3 position(static_cast<float>(x), static_cast<float>(y), z);  // screen-space position
(my LPD3DXSPRITE object)->Draw((sprite texture pointer), NULL, &center, &position, D3DCOLOR_ARGB(a, r, g, b));  // NULL source rect = draw the whole texture
Is there a more efficient way to draw these images on the screen? Is there a way I can use less complex image files (I'm using regular PNGs right now) to speed things up?
To sum it up: what is the most performance-friendly way to draw sprites in DirectX? Thanks!
The ID3DXSprite interface you are using is already pretty efficient. Make sure all your sprite draw calls happen in one batch if possible, between the sprite Begin and End calls. This allows the sprite interface to arrange the draws in the most efficient way.
For extra performance, you can pack multiple smaller textures into one larger texture and use texture coordinates to pull them out. This way textures don't have to be swapped as frequently. See:
http://nexe.gamedev.net/directknowledge/default.asp?p=ID3DXSprite
The file type you are using does not matter, as long as the images are preloaded into textures. Make sure you load them all into textures once when the game/level is loading. After that, it does not matter what format they were originally in.
If you still are not getting the performance you want, try using PIX to profile your application and find where the bottlenecks really are.
Edit:
This is too long to fit in a comment, so I will edit this post.
When I say swapping textures, I mean binding them to a texture stage with SetTexture. Each time SetTexture is called there is a small performance hit as it changes the state of the texture stage. Normally this delay is fairly small, but it can be bad if DirectX has to pull the texture from system memory into video memory.
ID3DXSprite will reorder the draws that happen between Begin and End calls for you. This means SetTexture will typically only be called once for each texture, regardless of the order you draw them in.
It is often worth loading small textures into a large one. For example, if it were possible to fit all the small textures into one large one, then the texture stage could just stay bound to that texture for all draws. Normally this gives a noticeable improvement, but testing is the only way to know for sure how much it will help. It would look terrible, but you could just throw in any large texture and pretend it is the combined one, to test what performance difference there would be.
I agree with dschaeffer, but would like to add that if you are using a large number of different textures, it may be better to smush them together into a single (or a few) larger textures and adjust the texture coordinates of the different sprites accordingly. Texture state changes cost a lot, and this may speed things up on older systems.
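A generic sketch of the bookkeeping both answers describe - computing the source rectangle of one sprite inside a grid atlas (the grid layout is an assumption); the result is what you would pass as ID3DXSprite::Draw's source-rectangle argument instead of NULL:
interface Rect { left: number; top: number; right: number; bottom: number; }
// Pixel rectangle of sprite n in an atlas laid out as a uniform grid.
function atlasRect(n: number, tileSize: number, tilesPerRow: number): Rect {
  const col = n % tilesPerRow;
  const row = Math.floor(n / tilesPerRow);
  return {
    left: col * tileSize,
    top: row * tileSize,
    right: (col + 1) * tileSize,
    bottom: (row + 1) * tileSize,
  };
}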
