WP 7/8 XNA games are set to 30 FPS by default. If I set it to 60 FPS, are there any disadvantages, performance issues, bugs, or anything else? I always want to play and develop games at 60 FPS.
I just want to add a few points to Strife's otherwise excellent answer.
Physics simulations are generally more stable at higher update rates. Even if your game renders at 30 FPS, you can run your physics simulation at 60 Hz by doing two 1/60th-of-a-second updates each frame. This gives you a better simulation, but at a higher CPU cost. (Although, for physics, a fixed time step is more important than a high update rate.)
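A minimal sketch of that idea in plain C#, not tied to any engine; UpdatePhysics and Draw are hypothetical placeholders, not real APIs:

    // Fixed-timestep loop: rendering runs at whatever rate the frame loop
    // achieves, but physics always advances in fixed 1/60 s slices.
    class FixedStepLoop
    {
        const double PhysicsStep = 1.0 / 60.0;
        double accumulator;

        // Call once per rendered frame with the real elapsed time.
        public void Frame(double elapsedSeconds)
        {
            accumulator += elapsedSeconds;

            // At 30 FPS (~1/30 s per frame) this loop runs twice per frame,
            // so the simulation still sees a fixed 60 Hz time step.
            while (accumulator >= PhysicsStep)
            {
                UpdatePhysics(PhysicsStep);
                accumulator -= PhysicsStep;
            }

            Draw();
        }

        void UpdatePhysics(double dt) { /* integrate bodies with a constant dt */ }
        void Draw() { /* render the current state */ }
    }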
If you poll input 30 times per second, instead of 60, you will miss extremely fast changes in input state, losing some input fidelity.
Similarly, your frame rate affects the input-to-output latency. At a lower frame rate it will take longer for an input change to be reflected on-screen, making your game feel less responsive. This can also be an issue for audio feedback - particularly in musical applications.
Those last two points are only really important if you have a very twitchy game (Super Hexagon is a fantastic example). Although I must admit I don't know how fast the touch input on WP7/8 is actually refreshed - it's difficult information to find.
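To make the polling point concrete, here is a hedged sketch of per-frame touch polling in XNA. TouchPanel.GetState() and TouchLocationState are the real XNA touch APIs; the surrounding class and OnTap are illustrative:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input.Touch;

    // Polling reads a snapshot once per Update. Any press that begins and
    // ends entirely between two snapshots is never observed: at 30 FPS the
    // blind window is ~33 ms, at 60 FPS it shrinks to ~16 ms.
    public class InputSampler
    {
        public void Update()
        {
            TouchCollection touches = TouchPanel.GetState();

            foreach (TouchLocation touch in touches)
            {
                // Pressed is only reported on the snapshot where the touch
                // first appears, so a very short tap can be missed outright.
                if (touch.State == TouchLocationState.Pressed)
                    OnTap(touch.Position);
            }
        }

        void OnTap(Vector2 position) { /* illustrative handler */ }
    }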
The Windows Phone 7 SDK sets XNA to 30 FPS because the screen on Windows Phone 7 devices has a 30 Hz refresh rate, meaning the screen refreshes 30 times a second. If you are drawing 30 times a second and the screen refreshes 30 times a second, you're at the optimal rate of smoothness for that device.
The reason most people aim for 60 (or on my gaming PC, 120) is that most monitors have a 60 Hz refresh rate (some are now 120 Hz). If your FPS is HIGHER than your refresh rate, you won't see any benefit - except possibly an effect known as "screen tearing", which is what happens when you render more frames in a second than your screen refreshes.
In other words, imagine you draw to the screen two times and then your screen refreshes once - why did you bother drawing the second time? You waste battery life, CPU usage, and GPU usage when you render faster than the refresh rate of a device. So my advice, if you're sticking with XNA, is to stick with 30 FPS, because the older devices won't get any benefit from having more frames rendered, and if anything you'll get graphical anomalies like screen tearing.
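For reference, the rate is set from the Game constructor via TargetElapsedTime (a real XNA property; the WP7 project template ships with the 30 FPS value). The class name here is a placeholder:

    using System;
    using Microsoft.Xna.Framework;

    public class MyGame : Game
    {
        GraphicsDeviceManager graphics;

        public MyGame()
        {
            graphics = new GraphicsDeviceManager(this);

            // The WP7 template default: one Update/Draw per 333,333 ticks
            // (~33.3 ms), i.e. 30 FPS.
            TargetElapsedTime = TimeSpan.FromTicks(333333);

            // To experiment with 60 FPS instead:
            // TargetElapsedTime = TimeSpan.FromTicks(166667); // ~16.7 ms

            IsFixedTimeStep = true; // call Update at the fixed rate above
        }
    }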
If you plan to target higher-end (and newer) Windows Phone 8 devices, drop XNA, go the Direct3D route and use Microsoft's DirectX Tool Kit, because it includes XNA's "graphics functions", like SpriteBatch, but in C++ instead of C#.
I hope this helps.
I'm studying "Analyze runtime performance" using the demo https://googlechrome.github.io/devtools-samples/jank/
When I open the FPS meter (via Frame Rendering Stats in the Rendering tab), it shows 10.8 fps:
However, when I record with the Performance panel, there are lots of red blocks and only a few green blocks - roughly one green block for every 20 red blocks. Yet every block reports 60 fps, which seems totally wrong.
The red blocks are dropped frames, and I've noticed that during those 20 red blocks the animation never changes. So I imagine the real value might be 60 / 20 = 3 fps. That seems reasonable, since there are in fact only 3 green blocks within 1000 ms. But that still doesn't equal 10.8 fps. It really confuses me!
Supplement:
The documentation shows 12 fps (see the image there), but it doesn't show any red dropped frames. Maybe that's because dropped-frame reporting was introduced in 2020 (https://docs.google.com/document/d/1-KP3fAjemdm7lnvCll9T1Lg-ikCzRPI-oNCp8fHAQCc/edit#heading=h.neguedjcao67), and the documentation is actually outdated?
I tried https://googlechrome.github.io/devtools-samples/jank/ and set the CPU to "6x slowdown" in the Performance tab. The animation became significantly janky, while the frame rate stayed very close to 60 fps.
It seems that the FPS meter is not accurate.
The animation in my 2D game is 24 FPS. Is there any good reason not to set the game's target frame rate to 24 FPS? Wouldn't that increase performance consistency and battery life on mobile? What would I be giving up?
You don't say what kind of game it is, but I will try to answer anyway.
Setting 24 FPS would indeed increase performance consistency and battery life.
The downside, besides the choppier visuals, is increased input lag. That will affect not only the 3D controls but also every UI button. Your game will feel a bit more laggy than other games - a very subtle feeling that adds up after a while.
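To put a rough number on it (ignoring display and touch-sampling latency), each frame of pipeline delay costs one frame period:

$$ \frac{1000\ \text{ms}}{24} \approx 41.7\ \text{ms} \qquad \text{vs.} \qquad \frac{1000\ \text{ms}}{60} \approx 16.7\ \text{ms}, $$

so every frame of delay between input and display costs about 25 ms more at 24 FPS than at 60 FPS.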
You could get away with 24 depending on the nature of your game, but you should test it with different people. Some are more sensitive to this issue than others.
If you set the animations to their correct frame rate, Unity will interpolate the animation to the game's frame rate. So there is no need for the animations and the game itself to use the same value.
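A minimal sketch of the cap in Unity, assuming a bootstrap MonoBehaviour of my own naming; Application.targetFrameRate and QualitySettings.vSyncCount are the real Unity APIs involved:

    using UnityEngine;

    public class FrameRateConfig : MonoBehaviour
    {
        void Awake()
        {
            // targetFrameRate is ignored while vsync is on, so disable it
            // first; on mobile, vsync is not used and the cap applies directly.
            QualitySettings.vSyncCount = 0;
            Application.targetFrameRate = 24; // cap rendering at 24 FPS
        }
    }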
We are attempting to display 40 windows, each containing almost the same UI but displaying a different entity, with animated scene changes happening every few seconds in each window. All of our tests have ended with the windows being drawn at 3 to 5 frames per second once 10 to 20 windows are open, on a reasonably old and low-powered discrete nVidia GPU, on Windows.
Things we have tried:
Disabling animations - performance improves, but not nearly enough
CPU profiling - shows over 90% of the CPU time being spent in the system DLLs and the nVidia driver; on other machines, the CPU usage is not significant, but the frame rate is still low.
QML profiling - the Qt Creator profiler shows the render loop executing its steps at 60 fps, taking at most a couple of milliseconds per frame
Images/textures are loaded once to the GPU and never reloaded
All the rendering backends - OpenGL, ANGLE and Qt Quick 2D Renderer - perform more or less the same
Is there something major we are missing?
I'm developing a card game in Android using SurfaceView and canvas to draw the UI.
I've tried to optimize everything as much as possible but I still have two questions:
During the game I'll need to draw 40 bitmaps (the 40 cards in the Italian deck). Is it better to create all the bitmaps in the onCreate method of my customized SurfaceView (storing them in an array), or to create them as needed (every time the user gets a new card, for example)?
I'm able to get over 90 fps on an old Samsung I5500 (528 MHz, with a QVGA screen), 60 fps on an Optimus Life (800 MHz and HVGA screen) and 60 fps with a Nexus One/Motorola Razr (1 GHz and dual-core 1 GHz with WVGA and qHD screens), but when I run the game on an Android tablet (Motorola Xoom, dual-core 1 GHz with 1 GB of RAM) I get only 30/40 fps... How is it possible that a 528 MHz CPU with 256 MB of RAM can handle 90+ fps while a dual-core processor can't handle 60 fps? I'm not seeing any GC calls at runtime...
EDIT: Just to clarify, I've tried both ARGB_8888 and RGB_565 without any change in performance...
Any suggestions?
Thanks
Some points for you to consider:
It is recommended not to create new objects while your game is running; otherwise, you may trigger unexpected garbage collections (see the sketch after this list).
Your FPS numbers don't sound right; you may have measurement errors. However, my guess is that you are resizing the images to fit the screen size, which affects your game's memory usage and may cause slow rendering times on tablets.
You can use profiling tools to confirm: TraceView
OpenGL would be much faster
Last tip: don't draw overlapping cards if you can; draw only the visible ones.
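Here is the preloading pattern from the first point, sketched in C# to match the other examples on this page; on Android the equivalent is filling a Bitmap[] with BitmapFactory.decodeResource(...) in onCreate, before the game loop starts. All names here are hypothetical:

    using System;

    // "Decode once, index forever": the expensive, allocation-heavy decode
    // happens up front, so the draw loop never allocates and never triggers GC.
    public class CardCache<TBitmap>
    {
        readonly TBitmap[] cards;

        public CardCache(int deckSize, Func<int, TBitmap> decode)
        {
            cards = new TBitmap[deckSize]; // e.g. 40 for the Italian deck
            for (int i = 0; i < deckSize; i++)
                cards[i] = decode(i);      // decode each image exactly once
        }

        // The hot path is a plain array read: no allocation, no decoding.
        public TBitmap Get(int cardIndex) => cards[cardIndex];
    }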
Good Luck
OK, so it's better to create the bitmaps in the onCreate method - that's what I'm doing right now...
They are OK. I believe the 60 fps on some devices is just a cap imposed by the manufacturers, since you won't find any advantage in getting more than 60 fps (I'm making this assumption because it makes no difference whether I render 1 card, 10 cards or no cards... the onDraw method is called 60 times per second, but if I add, for example, 50/100 cards it drops accordingly). I don't resize any card because I use the proper folder (mdpi, hdpi, etc.) for each device, so I get the exact size of the image without resizing it...
I've tried to look at it, but from what I understand all of the app's execution time is spent drawing the bitmaps, not resizing them or updating their positions. Here it is:
I know, but it would add complexity to the development, and I believe that using a canvas for 7 cards on the screen should be just fine...
I don't draw every card of the deck... I just swap bitmaps as needed :)
UPDATE: I've tried to run the game on a Xoom 2, Galaxy Tab 7 Plus and Asus Transformer Prime and it runs just fine at 60 fps... could it just be a problem with Tegra 2 devices?
I'm trying to write an application to display multiple video streams, all updating at 25 or 30 images per second. The images are being rendered into WPF controls using Direct3D and some interop, to avoid using a WinForms control. As more video streams are added, the frame rate of each control drops, yet the CPU on my machine only ever reaches about 50%.
Using the Microsoft WPF Performance Suite's Perforator tool, it would appear that when the frame rate of the video streams starts to drop, the 'Dirty Rect Addition Rate' levels out, as if it has reached a maximum for the video card.
There is no software rendering activity in the application so it would appear that overall performance is being limited by the graphics card's ability to update the Dirty Rectangles.
Therefore, is there a feature or performance parameter that can be used to determine the best video card to buy in order to maximise performance for my application?
Either that, or is there a set of graphics card settings that will boost performance?
Currently running with an ATI FirePro V4800 that will happily run 16 streams of H264 video at a resolution of 4CIF but looking for the ability to run up to 32.