Trying to find a proper profiler to test performance of a Unity game after it is built and installed - performance

I have a problem with my game: after I build it and install it on my device, it lags and has low FPS (I tested on different devices and it's the same everywhere). I tried Unity's built-in profiler, which shows that everything is fine and always displays 100 (or more) FPS. So I think profiling the game after installation could help, but I can't find a proper profiler to use. Can someone give me a suggestion?
Thanks in advance

Unity's profiler is still a valid tool: you can attach it to a development build running on the device (enable "Development Build" and "Autoconnect Profiler" in the Build Settings) to find the slow parts in your scripts. Profiling in the editor only measures your desktop machine, which is why it always shows 100+ FPS.
The main difference on mobile devices is the much weaker GPU.
To get good performance on those GPUs you need to bring down the polygon count and the number of draw calls.
You can find those figures in the Stats window of the Game tab.
Mobile shaders and baked lighting also help.
You can find more hints in Unity's Mobile Optimization Guide.
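
If you just want to see the real frame rate on the device, a simple on-screen FPS counter is enough. A minimal sketch (a hypothetical MonoBehaviour, not tied to any particular plugin):

using UnityEngine;

// Minimal on-device FPS readout (hypothetical example).
// Attach to any GameObject; it draws a smoothed frames-per-second
// value in the top-left corner of the screen.
public class FpsDisplay : MonoBehaviour
{
    float smoothedDelta;

    void Update()
    {
        // Exponentially smooth the frame time so the readout stays stable.
        smoothedDelta += (Time.unscaledDeltaTime - smoothedDelta) * 0.1f;
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 200, 30),
            (1f / smoothedDelta).ToString("F0") + " FPS");
    }
}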

Take a look at the Graphy plugin. Maybe it will be useful in your case.
https://assetstore.unity.com/packages/tools/gui/graphy-ultimate-fps-counter-stats-monitor-debugger-105778

Related

General tips for running a Unity3D animation on Microsoft HoloLens

I'm going to be tasked with making sure that an animation created in Unity3D can run on a Microsoft HoloLens. I don't have any further information about the animation yet, but I wanted to ask in advance if there are any big things I should keep in mind.
In the animation you play a "character" in first-person mode, controlled by WASD or the arrow keys, and you can look up, down, left and right with the mouse. There are (as far as I know) no special interactions besides colliders.
And another question: is it easier to test the animation on an actual HoloLens or to use a HoloLens emulator on my laptop?
I know it's a lot to ask right now without any code or other material, but I still hope that some of you can give me a little advice :)
In my experience it is difficult to say. The HoloLens, although it is an awesome device with nice specs for its size, has quite limited graphical power. Try to reduce your model's vertices to a reasonably low amount (e.g. using Blender's decimate feature). Turn down the quality in Unity's quality settings, as proposed in the Dev-Guide.
For your emulation question: the emulator does not emulate the HoloLens' specs (like processor, memory...) but emulates input concepts etc. while running a Hyper-V virtual machine. So the performance in the emulator depends on your computer's hardware and is not related to the actual performance on a HoloLens.
Also take a look at the performance guidelines from Microsoft
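
For the quality settings part, a minimal sketch of forcing a low level at startup might look like this (the level index is an assumption; it depends on the order of the entries in your project's Quality Settings):

using UnityEngine;

// Hypothetical startup helper: force the first (usually lowest) quality
// level defined in Edit > Project Settings > Quality.
public class LowQualityOnStartup : MonoBehaviour
{
    void Awake()
    {
        // 'true' also applies expensive changes such as anti-aliasing.
        QualitySettings.SetQualityLevel(0, true);
    }
}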
I worked on the HoloLens for a couple of projects. A few points that may be useful for you:
the first big thing I would keep in mind is understanding whether the character has to move in a VR environment. In that case the HoloLens is almost useless, because its lenses will let you see the surroundings [the real ones], distracting you from the virtual world. This is exactly what happens with their pre-installed HoloTour. Nice attempt, but you will not totally feel like you are in Rome or Machu Picchu
the second big thing I would consider is the fact that - at least for the first release - the HoloLens has a very limited field of view, which "amounts to the size of a monitor in front of you – equivalent to 15 inches" [source]. It is likely that - in a situation where the character can look in every direction - the objects you put in the AR space will end up being cut off or invisible
about testing: the emulator is really exceptional; I didn't find great differences between it and the real device. Of course, if you already have a real HoloLens I would use that. But if not, I would first develop and test on the emulator to find out whether the project is worth the purchase

AS3 Greensock Chugs in Browser

I'm building an ad in Flash that has a lot of looping MCs playing at once. When I run a GreenSock command, it runs fine in Flash, but in the browser it only plays the last frame of the tween. Without a lot of trial and error, is there a way to know which things I should try first to improve performance?
If I were you I'd use a profiler to see where the problem is. A good tool for that is Adobe Scout: it will help you identify bottlenecks and the places in your code where you may have a problem.
Certain IDEs feature profilers as well, including FlashDevelop (free) and Flash Builder ($$$).

Dealing with OpenGL ES 2.0 driver bugs

I'm currently porting a 3D C++ game from iOS to Android using NDK. The rendering is done with GLES2. When I finished rewriting all the platform specific stuff and ran the full game for the first time I noticed rendering bugs - sometimes only parts of the geometry would render, sometimes huge triangles would flicker across the screen, and so on and so on...
I tested it on a Galaxy Nexus running 4.1.2. glGetError() returned nothing. Also, the game ran beautifully on all iOS devices. I started suspecting a driver bug and after hunting for many hours I found out that using VAOs (GL_OES_vertex_array_object) caused the trouble. The same renderer worked fine without VAOs and produced rubbish with VAOs.
I found this bug report on Google Code. I also saw the same report on the IMG forums, where a staff member confirmed that it is indeed a driver bug.
All this made me think - how do I handle cases of confirmed driver bugs? I see 2 options:
Not using VAOs on Android devices.
Blacklisting specific devices and driver revisions, and not using VAOs on these devices.
I don't like either option.
Option number 1 punishes all users who have a good driver. VAOs really boost performance, and I think it's a really bad thing to ignore them because one device has a bug.
Option number 2 is pretty hard to do right. I can't test every Android device for broken drivers, and I expect the list to change constantly, making it hard to keep up.
Any suggestions? Is there perhaps a way to detect such driver bugs at runtime without testing every device manually?
Bugs in OpenGL ES drivers on Android are a well-known thing, so it is entirely possible that you have hit a driver bug, especially since you are using an advanced (not-so-well-tested) feature like a GL extension.
In a large Android project we usually fight these issues using the following checklist:
Test and debug our own code thoroughly and check it against the OpenGL specification to make sure we are not misusing the API.
Google for the problem (!!!)
Contact the chipset vendor (usually they have a form on their website for developers to submit bugs, but once you have successfully submitted 2-3 real bugs you will know the direct emails of the people who can help) and show them your code. Sometimes they find bugs in the driver, sometimes they find API misuse...
If the feature doesn't work on a couple of devices, just create a workaround or fall back to a traditional rendering path (see the sketch after this list).
If the feature is not supported by the majority of the top-notch devices, just don't use it; you will be able to add it later once the market is ready for it.
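
As a sketch of the per-device fallback idea: query the GL_RENDERER string once at startup and disable the extension on drivers you have confirmed to be broken. The renderer list below is an assumption - populate it only with combinations you have actually verified (the Galaxy Nexus uses a PowerVR SGX 540):

using System;

// Hypothetical helper illustrating a driver-blacklist fallback.
static class VaoSupport
{
    // Only list renderer substrings you have confirmed to be broken.
    static readonly string[] BrokenVaoRenderers = { "PowerVR SGX 540" };

    // Pass in the result of glGetString(GL_RENDERER).
    public static bool ShouldUseVaos(string glRenderer)
    {
        foreach (string bad in BrokenVaoRenderers)
        {
            if (glRenderer.IndexOf(bad, StringComparison.OrdinalIgnoreCase) >= 0)
                return false;
        }
        return true;
    }
}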

unity3d and webgl comparison in terms of performance and speed

I am going to develop a lesson on two platforms (first in WebGL and then a similar lesson in Unity 3D).
The aim of this research is to find out which of these platforms is best in terms of performance and speed for use in e-learning environments.
My question is this:
How can I measure the performance (processor, memory, graphics card) of these platforms?
Also, I would really appreciate it if anyone could give me ideas or suggestions to improve this research.
WebGL and Unity are not platforms. Unity is a library that supports multiple platforms; its performance depends on what hardware it's running on. WebGL is a JavaScript API for browsers that allows them to access OpenGL ES 2.0. It isn't a platform either; it is utterly dependent on the hardware it is running on.
Sure, each incurs overhead, but they also do completely different things. Even if one is seen as faster for a particular piece of hardware, that doesn't mean that you can use it. Unity makes applications. Something you download and install. WebGL is for web pages: HTML+JavaScript. The reasons to use one are not the same reasons you would have to use the other.
Making a "WebApp" is very different from making a regular application. You generally decide first off whether you want to make a WebApp or a regular application, then use the tools that are available to the one you pick.
There are platforms that don't support WebGL. Namely, Internet Explorer. Microsoft has already stated that they aren't going to implement WebGL. So WebGL's performance on IE is effectively 0.
Also, WebGL is a low-level rendering API; Unity is a game engine. Unity provides more functionality towards making a game than WebGL, so there are productivity differences you must take into account.
Your desire to compare the performance of these simply is not the most useful criterion for deciding which one to use.
OK, your later answer clued me in to the idea that you're focusing on browser-based tools.
WebGL is not available on Internet Explorer. So again, half of your customer base is gone. However, Unity's browser plug-in is a plug-in and therefore must be downloaded by the user. Quite a few users are against that. Also, Unity's browser plug-in doesn't work on mobile systems; you would be expected to write an app for those.
So which matters more to you: reaching out to mobile users (where WebGL is available), or reaching out to Internet Explorer users? Again, this is something you need to deal with long before you answer questions of performance.

Periodic GPU performance problem

I have a WinForms application that uses XNA to animate 3D models in a control. The app has been doing just fine for months, but recently I've started to experience periodic pauses in the animation. Setting out to investigate what is going on, I have established these facts:
1. It happens on my machine only; other machines work fine
2. Removing everything from my render loop does not improve the problem
In 2. I didn't actually remove everything; I limited my loop to setting the viewport on my GraphicsDevice and then calling GraphicsDevice.Present.
Trying to dig further I fired up PIX to capture some statistics. Screenshots of two PIX runs can be viewed here (Run6) and here (Run14). Run6 is using my original render loop and Run14 is using the bare-bones Present loop.
PIX tells me that the GPU is periodically doing something, and I assume this is causing the pauses. What could be the cause of this? Or how do I go about finding out what the GPU is actually doing?
Update: since I usually trust my code to be perfect (who's laughing?), I started a new XNA project from scratch to see if it exhibits the same behavior. So, starting a new XNA 3.1 Windows Game project and running PIX, I get this timeline. The same periodic pauses. So the problem must be lower in the stack, in XNA or Direct3D.
So PIX shows that the GPU is working on something. I can see the list of DX calls made within each frame, and the timing calculations show that the pause occurs during (or after) the IDirect3DDevice9::Present call.
Update 2: I had previously installed and uninstalled XNA 4.0 CTP on the problematic machine. I cannot be certain that this is related but I thought that perhaps a reinstall of the XNA Game Studio 3.1 bits could make a difference. Turns out it did.
The underlying question remains the same (and the bounty is still up): what could affect XNA 3.1 (or DirectX) to make it behave like this and is there any logging/tracing power tool for the DirectX and/or GPU level out there that could shed some light on what is going on?
Note: I'm using XNA 3.1 on a Windows 7 x64 dual-core machine with 8GB RAM.
Note 2: I also posted this question on the XNA Creators forums here.
You could try to see if you can find something with Xperf that matches your periodic problem. Do not run your application, but keep open the programs that would normally run alongside it. You could also try it again with the application running, but that could give a cluttered view.
Start the tracing; do this in an elevated prompt.
xperf -on BASE+LATENCY -stackWalk Profile
Wait for a fair amount of time to be sure that the problem is traced.
Stop the tracing and open it like this.
xperf -d trace.etl
xperfview trace.etl
Analyze by looking at the graphs and consulting the tables for specific intervals, and see if you can find something related to the problem. The best chance of finding it is in the DPC and Interrupt sections, but it might just as well be something odd in the CPU or I/O sections. Good luck!
There is also more information on Xperf and how to obtain it; hopefully this delivers results.
If not, you can alternatively try GPUView, which has been used for improvements in the DWM.
It is also included next to Xperf in the Windows Performance Toolkit, so you can easily try both!
log v
... wait for a fair amount of time to be sure that the problem is traced ...
log
gpuview merged.etl
In case GPUView runs out of memory, you can try adding "/limit 3" or removing the v.
Read the documentation of the tools if you are stuck somewhere.
Hmm... this seems to be occurring on the GPU; however, it sounds like a CPU garbage collection issue. Can you run the CLR Profiler and see if there are any spikes in GC activity that you can correlate to the slowdowns?
I agree that it sounds unlikely since you can clearly see it in PIX, but it might offer a clue as to the cause.
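
If you want a quick sanity check before reaching for the CLR Profiler, you can also log collection counts from inside the app. A minimal sketch (the class and method names here are made up for illustration):

using System;

// Hypothetical helper: call Sample() once per frame from the render loop;
// it prints whenever a gen-0 collection happened since the last frame,
// which you can then line up with the observed pauses.
static class GcWatch
{
    static int lastGen0Count;

    public static void Sample()
    {
        int gen0Count = GC.CollectionCount(0);
        if (gen0Count != lastGen0Count)
            Console.WriteLine("Gen-0 GC collections since last frame: "
                + (gen0Count - lastGen0Count));
        lastGen0Count = gen0Count;
    }
}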
If it's only happening on your own machine, then could it be drivers? Forgive me for being skeptical, but it's a 64 bit machine after all :D
This looks like either a vsync issue or a GPU in its last throes. Since going back to a different version fixed it, and the "bottleneck" is in IDirect3DDevice9::Present, let's go with the former option.
I'm not familiar with XNA, so I don't know how much of the workings of D3D are exposed, but do you know what your PresentationParameters are set to?
Specifically, try setting the swap effect to Discard.
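
A minimal sketch of how that could look in XNA 3.1, assuming the usual Game/GraphicsDeviceManager setup (PresentationParameters can be overridden just before the device is created):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Sketch: override the swap effect before the GraphicsDevice is created.
public class MyGame : Game
{
    readonly GraphicsDeviceManager graphics;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);

        // PreparingDeviceSettings fires just before device creation,
        // which is the last point where the PresentationParameters
        // can still be changed.
        graphics.PreparingDeviceSettings += (sender, e) =>
        {
            e.GraphicsDeviceInformation.PresentationParameters.SwapEffect =
                SwapEffect.Discard;
        };
    }
}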
