(Unity) Bad Animator performance

I am currently working on a 2D jump-and-run game and am doing some performance testing and optimization. That's where I ran into a big problem: when I load 2,500 enemies into my game, the scripts and everything else run smoothly and don't take long, but the Animators on those enemies take up roughly 33 ms. I don't think that is how it should behave, because just calling events from the animators shouldn't take that long.
Unity version: 2018.1.0f2
I hope somebody can help me with this problem...
Image of Performance Profile

Related

Unity - How to get started working on a large terrain/map?

I want to begin working on a big sandbox game with lots of oceans and islands. This will obviously require a very big map.
I've spent the last couple of days researching the best methods for doing this and so far have figured out that:
I need to split the terrain into tiles/chunks.
I need to have the player only load those tiles/chunks that are closest to them and unload those further away.
That being said, I would rather avoid flooding my game with a bunch of assets and want to handle things on my own and hard-code these systems myself.
Some questions:
Can I just create a massive 20k x 20k terrain using Unity's built-in terrain editing system and then worry about splitting it and loading/unloading the tiles later?
Or do I need to build my big terrain in another program, import it, and then handle the splitting and loading situation?
Also, when it comes to multiplayer, I assume I would just need to do basically the same thing for each client?
I would appreciate any other tips or guidance on doing this as well. Thanks.
Splitting the terrain into multiple chunks, but ones that aren't too small, seems to be the better option, maybe something like four quarters. That way you are not flooding the game with assets; you would disable all quarters except the one the player is in.
The problem with making one big terrain is that, while you could get a tool from the Asset Store to split your terrain later, you will run into headaches with resolutions and sizes, so overall I think it's better to split it into four pieces or so from the start.
You would have a trigger point where you want the game to load the terrain, and depending on which trigger point the player entered you would load the corresponding terrain piece (see the sketch below).
For multiplayer you would do the same thing for each client: treat your big terrain as four smaller terrains and load the corresponding quarter. Tell me if I missed something.
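To illustrate the loading logic, here is a small engine-agnostic sketch in TypeScript; Chunk, setActive and the player coordinates are hypothetical placeholders, not Unity API (in Unity, setActive would correspond to enabling/disabling the terrain GameObject or loading a scene additively):

// One entry per terrain quarter.
interface Chunk {
  centerX: number;
  centerZ: number;
  setActive(active: boolean): void;
}

// Enable the chunk(s) near the player, disable the rest.
function updateChunks(chunks: Chunk[], playerX: number, playerZ: number, loadRadius: number): void {
  for (const chunk of chunks) {
    const dx = chunk.centerX - playerX;
    const dz = chunk.centerZ - playerZ;
    chunk.setActive(dx * dx + dz * dz <= loadRadius * loadRadius);
  }
}

Running something like this from the player's movement update (or from the trigger-enter events described above) keeps only the relevant quarter active; for multiplayer, the server would run the same bookkeeping per client.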

Three.js: What's the upper limit for holding 60 FPS on an average desktop?

I'm currently working on a game using Three.js. I've been studying software engineering for four years and have been working professionally on backends for two, but I've barely touched on graphics aside from some simple Unity experimenting.
I currently have ~22,000 vertices and ~8,000 faces according to renderstats.js, and my desktop (above average) can't run it above 20 FPS. I'm using Lambert material as well as a single ambient light, so I feel like this isn't too much to ask.
With these figures in mind, is this the expected behavior for three.js rendering?
I'm pretty sure that is not the end of the line, and you are probably missing some opportunities for massive performance improvements.
But just to give you some numbers first:
if you leave out everything fancy (including three.js) and just render an ultra-simple point cloud with one fragment rendered per point, you can easily get to rendering 10-20 million (yes, million) points/vertices on an average GPU.
just with simple shapes and materials, I already got three.js to render something in the range of 500k triangles (at 1080p resolution) at 60 FPS without problems. You can probably multiply those numbers by 10 for the latest high-end GPUs.
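For reference, a minimal three.js point-cloud test in that spirit could look like the sketch below (camera, renderer and the render loop are omitted; note that in older three.js releases setAttribute was called addAttribute):

import * as THREE from 'three';

const scene = new THREE.Scene();

// Fill a flat position buffer with one million random points.
const COUNT = 1_000_000;
const positions = new Float32Array(COUNT * 3);
for (let i = 0; i < positions.length; i++) {
  positions[i] = (Math.random() - 0.5) * 100;
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// All points are drawn in a single draw call.
const pointCloud = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.05 }));
scene.add(pointCloud);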
However, these kinds of numbers are not really helpful.
Some hints:
if you want to debug your rendering performance, you should first add some metrics. Renderstats is good, but I'd recommend integrating http://spite.github.io/rstats/ for this (see the example).
generally the choice of material shouldn't matter too much; the GPU is way more capable than most people think, so it's more likely a problem somewhere else in the pipeline. EDIT from comment: in some cases, like high-resolution displays with slow GPUs (think mobile devices), this is less true and complicated shader code can slow your site down, but it's worth looking at the other points first. As the rendering itself happens off-thread (so you can't measure its duration using regular tools like the devtools profiler), you can use the EXT_disjoint_timer_query extension to get some information about what is going on on the GPU.
the number of draw calls shouldn't be too high: three.js needs to issue a draw call for every Mesh and Points object rendered in the scene, and too many objects are generally a far bigger problem than objects with lots of vertices. You can reduce the number of draw calls by merging multiple geometries into one and making use of multi-materials, vertex colors and things like that (see the sketch after this list).
if you are doing postprocessing, the GPU needs to render every pixel on screen several times, which can also massively limit your performance. This can be optimized by merging multiple postprocessing passes into one (I admit, that'd be a lot of hard work...).
another problem could be on the JS side: you should use the profiler or timeline view of the Chrome devtools to see if maybe it's the JavaScript that is taking too much time per frame (it shouldn't be more than 8-12 ms per frame). I've been told there are ways to optimize JavaScript performance as well :)
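To illustrate the draw-call point from above, here is a rough sketch of merging many small geometries into one Mesh using three.js's BufferGeometryUtils helper (the helper lives in the examples folder of recent three.js builds; in newer releases the function is named mergeGeometries rather than mergeBufferGeometries):

import * as THREE from 'three';
import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

const scene = new THREE.Scene();

// Instead of adding 1000 Meshes (1000 draw calls), bake each object's
// transform into its geometry and merge everything into a single geometry.
const pieces: THREE.BufferGeometry[] = [];
for (let i = 0; i < 1000; i++) {
  const box = new THREE.BoxGeometry(1, 1, 1);
  box.translate(Math.random() * 100, 0, Math.random() * 100);
  pieces.push(box);
}

const merged = mergeBufferGeometries(pieces);
const mesh = new THREE.Mesh(merged, new THREE.MeshLambertMaterial({ color: 0x808080 }));
scene.add(mesh); // one draw call instead of 1000

The trade-off is that the merged object is culled and transformed as a single unit, so this works best for static scenery.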

threejs benchmark and progressive enhancement

I am loading a three.js scene on a website and I would like to optimize it depending on the capabilities of the graphics card.
Is there a way to quickly benchmark the client computer and get some data that will let me decide how demanding or simple my scene has to be in order to run at a decent FPS?
I am thinking of a benchmark library that can easily be plugged in, or a benchmark-as-a-service. And it has to run without the user noticing.
you can use stats.js to monitor performance. It is used in almost all three.js examples and is included with the three.js repository.
the problem with this is that the framerate is capped at 60 FPS, so you can't tell how many ms are lost to vsync.
the only thing I found to be reliable is to take the render time and increase quality if it's under a limit, and decrease it if it takes too long (sketched below).
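A minimal sketch of that idea (the thresholds and the choice of pixel ratio as the quality knob are just examples, not part of the answer above): time the render call each frame and step the quality up or down.

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 1000);
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

let pixelRatio = window.devicePixelRatio;

function animate(): void {
  requestAnimationFrame(animate);

  const start = performance.now();
  renderer.render(scene, camera);
  const renderMs = performance.now() - start;

  // Crude hysteresis: lower the resolution when a frame costs too much,
  // raise it again when there is plenty of headroom.
  if (renderMs > 12 && pixelRatio > 0.5) {
    pixelRatio = Math.max(0.5, pixelRatio - 0.1);
    renderer.setPixelRatio(pixelRatio);
  } else if (renderMs < 4 && pixelRatio < window.devicePixelRatio) {
    pixelRatio = Math.min(window.devicePixelRatio, pixelRatio + 0.1);
    renderer.setPixelRatio(pixelRatio);
  }
}
animate();

Keep in mind this only measures the CPU-side cost of issuing the frame; the GPU renders asynchronously, so it is a rough proxy rather than a true benchmark (and with vsync you still won't see where the remaining milliseconds go).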

How to quickly create hundreds of biped animations?

I am a video game programmer working on building my own video game. I've decided that in order to build my game, I am going to need a large amount of animation files from 3DS Max.
My question is, what is the best approach to building a huge number of animation files? I'm looking to create 20 movement animations + 4 fighting styles * 18 attack types + 8 shooting animations + 10-20 magic casting animations, for an estimated total of 110-120 animations (and probably more that I can't think of now).
I'm personally only planning on creating a small number of these animations myself, but I am trying to design the best workflow for creating a huge number of animations so that once I decide to create these animations, it is a feasible task.
I am familiar with how to create animations manually in 3ds Max, but this approach seems slow and would take too many man-hours to complete. I am vaguely familiar with motion capture, but I don't know of any approaches or tutorials for it, and I don't know whether it would work at that scale.
Here are a few suggestions for making many animations quickly on a low budget:
Avoid 3ds Max bones; use the Biped system with the Skin modifier, so you don't have to spend much time creating the rig.
Plan your game design around what is feasible for you: I mean simple character models, without complex effects like hair, clothes and facial expression morphs.
Since motion capture is expensive, you can use reference videos inside your scene, placed on a plane's texture, to help you create animation keys.
Use MaxScript to automate repetitive tasks. MaxScript is easy to learn, and there are lots of free plugins at http://www.scriptspot.com/
There is a lot of work involved that you can't avoid if you want to create original content, unless you choose the expensive way:
The really quick approach is to use a service like http://www.mixamo.com/
There you upload your model, auto-rig it, and apply an animation in less than 3 minutes each. They have a database of motion captures and also provide custom motions.

Tips for optimizing performance of -webkit-transform?

I'm using webkit-transform: translate3d and a few other properties pretty extensively on a mobile app for iPhone because it's hardware accelerated. With about 98% of the features in place, performance is great. I'm aware of not trying to do too much at once.
I'm successfully simulating swiping in a very smooth, native-feeling way. What I've noticed now is that when I add the last 2% of features, I'm seeing some image redrawing issues in the element that is being animated while swiping. After you swipe through all 4 images and they load, performance is perfectly smooth again. However, when this section is hidden and shown, the same thing happens.
What I hypothesize is happening is that there's an internal buffer limit being hit and it has to reload each time.
So with that background, the general question is: what kinds of performance optimizations have other developers made for -webkit-transform? I'm not necessarily asking about my particular situation, but rather what wider range of optimizations people have figured out for their individual needs.
Hopefully if this question gets some answers, it can be a resource for other folks asking the same question down the road.
It's a fairly well known thing, but making sure the element you transform uses 3D transforms where possible helps a lot on devices that hardware-accelerate transforms (iOS at the moment).
The easiest way to do that is to add:
transform: translate3d(0,0,0);
with the appropriate prefixes, to the CSS of the element in question, then just animate it as normal, using either 2D or 3D transforms.
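For example (a small TypeScript sketch; the .slider selector and the offset values are made up), a swipe handler would then move the element by updating that same 3D transform rather than left/top, so it stays on its composited layer:

// The element that was given translate3d(0,0,0) in its CSS.
const slider = document.querySelector('.slider') as HTMLElement;

// Move it with a 3D transform so WebKit keeps it hardware accelerated.
function setSwipeOffset(x: number): void {
  const value = 'translate3d(' + x + 'px, 0, 0)';
  slider.style.setProperty('-webkit-transform', value);
  slider.style.setProperty('transform', value);
}

// e.g. call setSwipeOffset(currentTouchX - touchStartX) from a touchmove handler.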
It might sound a bit weird, but I had a similar issue and I solved it by using -webkit-perspective: 1000.
I don't know how this works in favor of the transitions, but in my case it did.
