How to optimize scene loading in Unity3D - performance

I was wondering if there is any way to speed up level loading in Unity3D. I currently have a loading level between the two scenes, but it still takes around 5 seconds to load the new level on an iPad 3. That's quite a lot.
I've optimized all Start and Awake functions so there is really little going on there. However, I have a lot of sprites in each scene and I think they take up most of the loading time.
Could I somehow determine which objects need to be loaded at startup and which can be loaded during the first 10 seconds of the level? I tried loading the level additively, but that makes my game lag for a second or two.
Any smart way of speeding it up?

The time it takes to load your level is determined by the things that are referenced in your scene. The only way to cut loading time is to remove things.
You mentioned it would be fine to load some things after the level has started. You can do this using Resources.Load. However, this will only benefit you if there are no references to the thing you are loading. For example, if you have 100 trees in your scene, it won't do you any good to reduce that to 10. The tree is still referenced and will have to be loaded before your level can start. If you eliminate all of them, then your level can start without loading the tree. It is then up to you to load it using Resources.Load and start planting, perhaps over the course of several frames to prevent a hiccup.
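A minimal sketch of that approach, assuming a tree prefab stored under a Resources folder at "Trees/Pine" (the path, counts, and placement logic here are placeholders):

using System.Collections;
using UnityEngine;

public class TreePlanter : MonoBehaviour
{
    IEnumerator Start()
    {
        // Loaded on demand; nothing else in the scene may reference this prefab,
        // or Unity will already have loaded it during the scene load.
        GameObject treePrefab = (GameObject)Resources.Load("Trees/Pine");

        for (int i = 0; i < 100; i++)
        {
            Vector3 pos = new Vector3(Random.Range(-50f, 50f), 0f, Random.Range(-50f, 50f));
            Instantiate(treePrefab, pos, Quaternion.identity);

            // Plant a few trees per frame instead of all at once to avoid a hiccup.
            if (i % 5 == 4)
                yield return null;
        }
    }
}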

Related

Instantiating multiple objects in one frame or one object per frame?

The idea is simple: instantiating a map in Awake with random values.
But the question is:
Should I instantiate the whole map in one frame (using a loop)?
Or is it better to instantiate one object per frame?
I don't want to ruin the player's RAM by instantiating 300 GameObjects in less than a second.
Whether you instantiate all GameObjects in one frame or not, they will end up in RAM the same way. The only way to "ruin" someone's RAM would be to instantiate so many GameObjects that there is no memory left. Considering that a typical prefab in Unity is only a few KB in size and typical RAM nowadays is a few GB, that would take roughly a million GameObjects.
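As a rough sketch, building the whole map in a single Awake loop might look like this (the prefab, dimensions, and placement are placeholders; 20 x 15 gives the 300 objects mentioned):

using UnityEngine;

public class MapBuilder : MonoBehaviour
{
    public GameObject tilePrefab;
    public int width = 20;
    public int height = 15;

    void Awake()
    {
        // 300 instantiations in one frame: they occupy RAM the same way
        // whether created here or spread over many frames.
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                Instantiate(tilePrefab, new Vector3(x, 0f, y), Quaternion.identity);
    }
}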
Never ever make things depend on frames, never!
There are some exceptions where this can be good, but most of the time it's not.
Good case:
- Incremental garbage collection (still has drawbacks)
Bad case:
- Your case: loading a map should happen at the beginning
Why should I not make my game frame-dependent?
Because PCs have different computational speeds. A good example is Harry Potter II: the game was developed for machines capable of 30 frames per second, and modern machines can run it extremely fast, so the game is basically sped up; you have to manually throttle the CPU to make it playable.
Another example is Unity's delta time: the reason you use it when moving objects over multiple frames is that it takes the previous frame's computation time into account.
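For illustration, a minimal frame-rate-independent mover (the speed value is arbitrary):

using UnityEngine;

public class Mover : MonoBehaviour
{
    public float speed = 5f; // units per second

    void Update()
    {
        // Scaling by Time.deltaTime makes the movement per-second rather than
        // per-frame, so it runs at the same speed at 30 fps and at 300 fps.
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }
}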
Also, 300 objects is nothing when loading a game. From a player's point of view, what is better: a 10-second load, or 30 seconds at 15 fps followed by normal speed?
(The above example is exaggerated, though.)
When loading a map you can do it asynchronously at the start of entering the scene. This way you can show a loading screen during the loading time. This is a good way to do it if you are making a single-player game. If it's a multiplayer game, you need to sync it on the server for every other player as well. The method for loading a scene asynchronously is SceneManager.LoadSceneAsync().
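A minimal sketch of that pattern (the scene name "Level2" and the loadingScreen object are placeholders):

using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class SceneLoader : MonoBehaviour
{
    public GameObject loadingScreen;

    public void LoadLevel()
    {
        StartCoroutine(LoadAsync("Level2"));
    }

    IEnumerator LoadAsync(string sceneName)
    {
        loadingScreen.SetActive(true);
        AsyncOperation op = SceneManager.LoadSceneAsync(sceneName);
        while (!op.isDone)
        {
            // op.progress runs from 0 to 0.9 while loading; a progress bar
            // could be driven from it here.
            yield return null;
        }
    }
}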
If you're trying to instantiate objects at runtime because you want to randomize certain objects, I would recommend placing every object that doesn't need randomizing directly in the scene, so it loads when the scene loads.
This is how I interpreted your question; tell me if I am wrong.

Browser getting more responsive after a while on heavy web page

When we load a very heavy web page with a huge HTML form and lots of event handler code on it, the page gets very laggy for some time, responding to any user interaction (like changing input values) with a 1-2 second delay.
Interestingly, after a while (depending on the size of the page and the amount of code to parse, but around 1-2 minutes) it gets as snappy as it normally is with average-size web pages. We tried using the profiler in the dev tools to see what could be running in the background, but nothing surprising is happening.
No network traffic is taking place after the page load, nor is there any blocking code running, and HTML parsing is long gone by that point according to the profiler.
My questions are:
- Do browsers do any kind of indexing on the DOM in the background to speed up element queries?
- Are there other types of optimization, like caching the results of repeated function calls?
- What causes the page to "catch up" after a while?
Note: it is obvious that our frontend is quite outdated and inefficient but we'd like to squeeze out everything from it before a big rewrite.
Yes. Modern browsers, or more precisely modern JavaScript runtimes, perform many optimisations during load and, more importantly, during the page lifecycle. One of them is lazy / just-in-time compilation, which in general means that the runtime observes demanding or frequently executed patterns and translates them into a faster, "closer to the metal" form, often at the cost of higher memory consumption. An amusing fact is that such optimisations often make "seemingly ugly but predictable" code faster than well-thought-out, complex, "hand-crafted optimised" code.
But I'm not completely sure this is the main cause of the phenomenon you are describing. Initial slowness and unresponsiveness are more often caused by the battle of network requests, blocking code, HTML and CSS parsing, and CPU/GPU rendering, i.e. the wire/cache -> memory -> CPU/GPU loop, which is not that dependent on the JavaScript optimisations mentioned above.
Further reading:
http://creativejs.com/2013/06/the-race-for-speed-part-3-javascript-compiler-strategies/
https://developers.google.com/web/tools/chrome-devtools/profile/evaluate-performance/timeline-tool

Over time, an application built in Flash Builder runs slower and slower

I have an application built in Flash Builder, written in ActionScript. When I launch the application it is very responsive and smooth. However, over time, and after interacting with some sliders in the application, the program slowly becomes more and more unresponsive and less smooth. For instance, when changing a slider's value from 0 to 100, it will initially update the slider with many intermediate values on the way to 100. After the application has been running for a while, taking the same action and moving the slider at the same rate from 0 to 100, I might only get a handful of values instead of maybe 50.
Does anyone know why this is happening and what I should check to reduce this leakage of performance?
It sounds like you most definitely have a memory leak in your application. If you are using Flash Builder/Flex Builder, you can use the Profiler tool to find out when exactly the memory usage increases and what objects are not being garbage collected.
Just make sure that you are not creating new instances every time the slider moves. Also remember that event listeners should be removed as soon as they are no longer needed.
This bit of documentation was rather helpful to me when I had a similar problem:
http://livedocs.adobe.com/flex/3/html/help.html?content=profiler_7.html

Is there any way to force JavaFX to release video memory?

I'm writing an application leveraging JavaFX that scrolls a large amount of image content on and off of the screen every 20-30 seconds. It's meant to be able to run for multiple hours, pulling in completely new content and discarding old content every couple of minutes. I have 512MB of graphics memory on my system, and after several minutes all of that memory has been consumed by JavaFX; no matter what I do with my JavaFX scene, none of it is released. I've been very careful to discard nodes when they drop off of the scene, and at most I have 50-60 image nodes in memory at one time. I really need to be able to do a hard release of the graphics memory that was backing these images, but haven't been able to figure out how to accomplish that, as the Image interface in JavaFX seems to be very high level. JavaFX will continue to run fine, but other graphics-heavy applications will fail to load due to limited resources.
I'm looking for something like the flush() method on java.awt.image.Image:
http://docs.oracle.com/javase/7/docs/api/java/awt/Image.html#flush()
I'm running Java 7u13 on Linux.
EDIT:
I managed to work out a potential workaround (see below), but have also entered a JavaFX JIRA ticket to request the functionality described above:
RT-28661
Add explicit access to a native resource cleanup function on nodes.
The best workaround I could come up with was to set my JVM's max heap to half of my graphics card's limit. (I have 512MB of graphics memory, so I set this to -Xmx256m.) This forces the GC to be more proactive in cleaning up my discarded javafx.scene.image.Image objects, which in turn seems to trigger graphics memory cleanup on the part of JavaFX.
Previously my heap space was set to 512MB (I have 4GB of system memory, so this is a very manageable limit). The problem with that seems to be that the JVM was being very lazy about cleaning up my images until it started approaching the 512MB limit. Since all of my image data was copied into graphics memory, this meant I had most likely exhausted my graphics memory before the JVM really started caring about cleanup.
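For reference, that just means passing a smaller max-heap flag when launching the application; something like the following, where the jar name is a placeholder:
$ java -Xmx256m -jar myapp.jar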
I did try some of the suggestions by jewelsea:
I am calling setCache(false), so this may be having a positive effect, but I didn't notice an improvement until I dropped my max heap size.
I tried running with Java 8, with some positive results. It did seem to behave better in graphics memory management, but it still ate up all of my memory and didn't seem to start caring about graphics memory until I was almost out. If reducing your application's heap limit is not feasible, then evaluating the Java 8 pre-release may be worthwhile.
I will be posting some feature requests to the JavaFX project and will provide links to the JIRA tickets.
Perhaps you are encountering behaviour related to the root cause of the following issue:
RT-16011 Need mechanism for PG nodes to know when they are no longer part of a scene graph
From the issue description:
Some PG nodes contain handles to non-heap resources, such as GPU textures, which we would want to aggressively reclaim when the node is no longer part of a scene graph. Unfortunately, there is no mechanism to report this state change to them so that they can release their resources so we must rely on a combination of GC, Ref queues, and sometimes finalization to reclaim the resources. Lazy reclamation of some of these resources can result in exceptions when garbage collection gets behind and we run out of these limited resources.
There are numerous other related issues you can see when you look at the issue page I linked (signup is required to view the issue, but anybody can signup).
A sample related issue is:
RT-15516 image data associated with cached nodes that are removed from a scene are not aggressively released
On which a user commented:
I found a workaround for my app: just setting cache to false for all frequently used nodes. Two days working without any crashes.
So try calling setCache(false) on your nodes.
Also try using a Java 8 preview release where some of these issues have been fixed and see if it increases the stability of your application. Though currently, even in the Java 8 branch, there are still open issues such as the following:
RT-25323 Need a unified Texture resource management system for Prism
Currently texture resources are managed separately in at least 2 places depending on how it is used; one is a texture cache for images and the other is the ImagePool for RTTs. This approach is flawed in its design, i.e. the 2 caches are unaware of each other and it assumes system has unlimited native resources.
Using a video card with more memory may either reduce or eliminate the issue.
You may also wish to put together a minimal executable example which demonstrates your issue and raise a bug request against the JavaFX Runtime project so that a JavaFX developer can investigate your scenario and see if it is new or a duplicate of a known issue.

Qt 4.6.x under MacOS/X: widget update performance mystery

I'm working on a Qt-based MacOS/X audio metering application, which contains audio-metering widgets (potentially a lot of them), each of which is supposed to be updated every 50ms (i.e. at 20Hz).
The program works, but when lots of meters are being updated at once, it uses up lots of CPU time and can bog down (spinny-color-wheel, oh no!).
The strange thing is this: Originally this app would just call update() on the meter widget whenever the meter value changed, and therefore the entire meter-widget would be redrawn every 50ms. However, I thought I'd be clever and compute just the area of the meter that actually needs to be redrawn, and only redraw that portion of the widget (e.g. update(x,y,w,h), where y and h are computed based on the old and new values of the meter). However, when I implemented that, it actually made CPU usage four times higher(!)... even though the app was drawing 50% fewer pixels per second.
Can anyone explain why this optimization actually turns out to be a pessimization? I've posted a trivial example application that demonstrates the effect, here:
http://www.lcscanada.com/jaf/meter_test.zip
When I compile (qmake;make) the above app and run it like this:
$ ./meter.app/Contents/MacOS/meter 72
Meter: Using numMeters=72 (partial updates ENABLED)
... top shows the process using ~50% CPU.
When I disable the clever-partial-updates logic, by running it like this:
$ ./meter.app/Contents/MacOS/meter 72 disable_partial_updates
Meter: Using numMeters=72 (partial updates DISABLED)
... top shows the process using only ~12% CPU. Huh? Shouldn't this case take more CPU, not less?
I tried profiling the app using Shark, but the results didn't mean much to me. FWIW, I'm running Snow Leopard on an 8-core Xeon Mac Pro.
GPU drawing is a lot faster than letting the CPU calculate which part to redraw. (At least this holds for OpenGL: the OpenGL SuperBible states that OpenGL is built to redraw everything, not to draw deltas, as computing deltas is potentially a lot more work.) Even if you use software rendering, the libraries are highly optimized to do their job properly and fast. So simply redrawing everything is the state of the art.
FWIW, top on my Linux box shows ~10-11% CPU without partial updates and ~12% with partial updates. I had to request 400 meters to get that CPU usage, though.
Perhaps it's just that the overhead of Qt setting up a paint region actually dwarfs your paint time? After all, your painting is really simple: it's just two rectangular fills.