I have a big performance problem in my game. I have gone through a lot of resources and applied what they recommend on this topic, but unfortunately the problem is still there.
I have already tried everything listed below to solve the issue, but nothing changed:
marked all of my GameObjects as static
resized all of the textures in the game
adjusted all of my audio settings
used low-poly objects in the game
baked the lighting
checked the Profiler
On a mid-range phone the frame rate is around 30-40 FPS.
I also deactivated all of my objects at runtime (UI, 3D models, terrain, etc.) purely as a test, to find out where the problem comes from.
But the funny part is:
even with everything deactivated, the frame rate is still only 35-45 FPS.
I'm really confused by that!
Update:
My game targets the Android platform.
I have uploaded some pictures from the Profiler:
CPU Profiler picture
Rendering Profiler picture
CPU Timeline Profiler picture
Audio Profiler picture
UI Profiler picture
Physics Profiler picture
Global Illumination Profiler picture
Related
The animation in my 2D game runs at 24 FPS. Is there any good reason not to set the game's target frame rate to 24 FPS? Wouldn't that improve performance consistency and battery life on mobile? What would I be giving up?
You write nothing about the kind of game, but I will try to answer anyway.
Setting 24 FPS would indeed increase performance consistency and battery life.
The downside is, besides choppier visuals, increased input lag. That will affect not only the 3D controls but also every UI button. Your game will feel a bit more laggy than other games, a very subtle feeling that adds up after a while.
You might get away with 24 depending on the nature of your game, but you should test it with different people; some are more sensitive to this issue than others.
If you set up the animations with their correct frame rate, Unity will interpolate the animation to the game's frame rate, so there is no need for the animations and the game itself to use the same value.
I am loading a ThreeJS scene on a website and I would like to optimize it depending on the capability of the graphics card.
Is there a way to quickly benchmark the client computer and get some data that will let me decide how demanding or simple my scene has to be in order to run at a decent FPS?
I am thinking of a benchmark library that can easily be plugged in, or a benchmark-as-a-service. And it has to run without the user noticing.
You can use stats.js to monitor performance. It is used in almost all three.js examples and is included in the three.js repository.
The problem with this is that the frame rate is locked to 60 FPS, so you can't tell how many milliseconds are lost to vsync.
The only thing I found to be reliable is to take the render time and increase the quality if it is under a limit, and decrease it if it takes too long.
I have a Qt application that is built around a QGraphicsView/Scene. The graphical performance is fine; animations are extremely smooth, and a simple high-resolution timer says the frames are drawing as fast as 400 FPS. However, the application is always using 15% CPU according to Task Manager. I have run performance analysis on it in Visual Studio 2012 and it shows that most of the samples are being taken in the QApplication::notify function.
I have set the viewport to render with a QGLWidget in hopes that offloading the drawing functions to the GPU would help, but that had no impact at all on CPU usage.
Is this normal? Is there something I can do to reduce CPU usage?
Well, there you have it: 400 FPS frame rates. That loads one of your cores at 100%. There is a reason people usually cap frame rates. The high frame rate is putting a strain on the Qt event system, which is driving the graphics.
Limit your frame rate to 60 FPS and problem solved.
"I'm not updating the view unless an event occurs that updates an individual graphicswidget"
Do not update the scene for each and every scene element change. This is likely the cause of the overhead. You can make multiple scene item changes, but render the scene at a fixed rate.
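A minimal sketch of that approach, assuming a standard QGraphicsView/Scene setup (the 60 FPS cap and names here are illustrative, not taken from the question): event handlers may change as many items as they like, but the viewport is only repainted when a timer fires.

```cpp
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QTimer>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGraphicsScene scene;
    QGraphicsView view(&scene);
    // Stop the view from scheduling a repaint for every item change;
    // painting is driven by our own timer below.
    view.setViewportUpdateMode(QGraphicsView::NoViewportUpdate);
    view.show();

    // Repaint the viewport at a fixed ~60 FPS, regardless of how many
    // scene item changes happened in between.
    QTimer frameTimer;
    frameTimer.setInterval(1000 / 60);
    QObject::connect(&frameTimer, &QTimer::timeout,
                     [&view] { view.viewport()->update(); });
    frameTimer.start();

    return app.exec();
}
```

With NoViewportUpdate the view no longer repaints per change; the timer becomes the only thing driving paints, so the event system does far less work.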
Also, I notice you said graphicswidget - which I assume is QGraphicsWidget - this might be problematic too. QObject derived classes are a little on the heavy side, and the Qt event system comes with overheads too, which is the reason the regular QGraphicsItem is not QObject derived. If you use graphics widgets excessively, that might be a source of overhead, so see if you can get away with using the lighter QGraphicsItem class and some lighter mechanism to drive your scene.
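For comparison, a plain QGraphicsItem only has to implement boundingRect() and paint() and carries none of the QObject/signal-slot machinery; a hypothetical minimal item looks like this:

```cpp
#include <QGraphicsItem>
#include <QPainter>
#include <QStyleOptionGraphicsItem>

// A lightweight item: no QObject base, no signals/slots, no per-item
// event-system overhead beyond what QGraphicsScene itself provides.
class DotItem : public QGraphicsItem
{
public:
    QRectF boundingRect() const override
    {
        return QRectF(-5, -5, 10, 10);
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *) override
    {
        painter->setBrush(Qt::red);
        painter->drawEllipse(boundingRect());
    }
};
```

Items like this are added with scene.addItem(new DotItem) and can still be moved from your event handlers; only elements that genuinely need layouts, signals, or widget-style focus handling have to remain QGraphicsWidgets.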
I've been developing a virtual camera app for depth cameras and I'm extremely interested in the Tango project. I have several questions regarding the cameras on board. I can't seem to find these specs anywhere in the developer section or forums, so I understand completely if they can't be answered publicly. I thought I would ask regardless and see if the current device is suitable for my app.
Are the depth and color images from the RGB/IR camera captured simultaneously?
What frame rates is the RGB/IR camera capable of? e.g. 30, 25, 24? And at what resolutions?
Does the motion tracking camera run in sync with the RGB/IR camera? If not, what frame rate (or refresh rate) does the motion tracking camera run at? Also, if they do not run on the same clock, does the API expose a relative or an absolute timestamp for both cameras?
What manual controls (if any) are exposed for the color camera? Frame rate, gain, exposure time, white balance?
If the color camera is fully automatic, does it automatically drop its frame rate in low-light situations?
Thank you so much for your time!
Edit: I'm specifically referring to the new tablet.
Some guesses:
No, the actual image used to generate the point cloud is not the droid you want. I put up a picture on Google+ that shows what you get when you grab one of the images containing the IR pattern used to calculate depth (an aside: it looks suspiciously like a Sierpinski curve to me).
Image frame rate is considerably higher than point cloud frame rate, but seems variable - probably a function of the load that Tango imposes
Motion tracking, i.e. pose, is captured at a rate roughly 3x the point cloud rate.
Timestamps are done with a fascinating double-precision number; in prior releases there were definitely artifacts/data in the LSBs of the double. I do a getPoseAtTime (callbacks are used for ADF localization) when I pick up a cloud, so supposedly I've got a pose aligned with the cloud (see the sketch after this list). Images have very low timestamp correspondence with pose and cloud data. It's very important to note that the three Tango streams (pose, image, cloud) all return timestamps.
Don't know about camera controls yet - still wedging OpenCV into the cloud services :-) Low light will be interesting: anecdotal data indicates that Tango sees a wider visual spectrum than we do, which makes me wonder whether fiddling with the camera at the point of capture to change image quality (e.g. dropping the frame rate) might cause Tango problems.
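To illustrate the pose/cloud alignment mentioned above, here is a rough sketch against the Tango C API as I recall it; the exact struct fields, enum names and signatures are from memory and should be checked against tango_client_api.h on the device.

```cpp
#include <tango_client_api.h>
#include <cstdio>

// Depth callback: take the cloud's timestamp and ask for the pose at exactly
// that time, so pose and cloud line up on the same clock.
// (Sketch only; verify all names against tango_client_api.h.)
static void onXyzIjAvailable(void* /*context*/, const TangoXYZij* cloud)
{
    TangoCoordinateFramePair frames;
    frames.base   = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
    frames.target = TANGO_COORDINATE_FRAME_DEVICE;

    TangoPoseData pose;
    if (TangoService_getPoseAtTime(cloud->timestamp, frames, &pose) == TANGO_SUCCESS &&
        pose.status_code == TANGO_POSE_VALID)
    {
        std::printf("cloud t=%.6f  pose t=%.6f  points=%u\n",
                    cloud->timestamp, pose.timestamp, (unsigned)cloud->xyz_count);
    }
}

// Registered once after connecting to the service, e.g.:
//   TangoService_connectOnXYZijAvailable(onXyzIjAvailable);
```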
I am developing a small game for Windows Phone which is based on Silverlight animation.
Some animations use the Silverlight animation framework (the Transforms API, for example) and some are frame-based. What I am doing is running a Storyboard with a very short duration, and when its Completed event fires I swap in the next image frame, so an image gets replaced every time the Completed event fires. But I think this is causing a memory leak in my game, and the memory footprint keeps growing over time.
Is this the right way to do frame-based animation, or is there a better way to do it in Silverlight?
What can I do to reduce memory consumption so that it does not grow over time?
As a general rule, beware animating anything which can't be GPU accelerated or bitmap cached. You haven't given enough information to tell if this is your issue but start by monitoring the frame rate counters, redraw regions and cache visualisation.
You can detect memory leaks with the built in profiling tools.
See DEBUG > Start Windows Phone Application Analysis