The animation in my 2D game is 24FPS. Is there any good reason not to set the game's target frame rate to 24FPS? Wouldn't that increase performance consistency and increase battery life on mobile? What would I be giving up?
You don't say what kind of game it is, but I'll try to answer anyway.
Setting 24 FPS would indeed increase performance consistency and battery life.
The downside, besides choppier visuals, is increased input lag that will affect not only your gameplay controls but every UI button. A frame at 24 FPS lasts ~42 ms versus ~17 ms at 60 FPS, so each frame of latency in the pipeline costs roughly 25 ms more. Your game will feel a bit more sluggish than other games, a very subtle feeling that adds up after a while.
You might get away with 24 FPS depending on the nature of your game, but you should test it with different people; some are more sensitive to this issue than others.
If you set up the animations with their correct frame rate, Unity will interpolate the animation to the game's frame rate, so there is no need for the animations and the game to use the same value.
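Unity handles that interpolation for you, but the idea is easy to see in an engine-agnostic way: the clip stores keyframes at 24 fps, and each rendered frame samples the clip at the current game time and blends between the two neighbouring keys. Here is a minimal sketch of that idea (written in Swift with made-up types purely for illustration; it is not the Unity API):

```swift
import Foundation

// A clip authored at 24 fps: one value per keyframe (e.g. a sprite's x position).
struct AnimationClip {
    let keyframes: [Double]
    let frameRate: Double = 24.0

    // Sample the clip at an arbitrary time, interpolating between keyframes,
    // so the result is smooth regardless of the game's render rate.
    func sample(at time: Double) -> Double {
        let exact = time * frameRate
        let lower = min(Int(exact), keyframes.count - 1)
        let upper = min(lower + 1, keyframes.count - 1)
        let t = exact - Double(lower)           // 0...1 between the two keys
        return keyframes[lower] * (1 - t) + keyframes[upper] * t
    }
}

// A 60 fps (or 144 fps) game loop can call clip.sample(at: gameTime) every frame
// and get in-between values the 24 fps clip never explicitly stored.
```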
I have a game written with SpriteKit which uses an SKEffectNode with a blur effect to blur a set of sprites, one of which has a fairly large texture, and which together cover a fairly large area of the screen. An iMac and MacBook Pro cope quite happily with this, but on a more humble MacBook there is a noticeable drop in frame rate with the effect node added in. Since the effect isn't crucial to the functionality of the game, I could simply not add the SKEffectNode on machines with less powerful graphics capabilities.
So then the question: what would be a good programmatic check that I could make to determine the "power of the GPU" or "performance when applying texture effects" or [suggest better metric here] and via what API? Thanks for your suggestions!
You'll have to create a performance test using your actual blurring processes and some sample content to get an accurate idea of the time cost of it on each generation of hardware.
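As a rough illustration of such a test (a sketch, not a definitive implementation): rasterise some representative blurred content once through SpriteKit and time it. The node size, blur radius and the ~8 ms budget below are placeholder values you would tune against your own content, and you would probably run it a few times and take the median, since the first render also pays shader-compilation cost.

```swift
import SpriteKit
import CoreImage
import AppKit

// Sketch of a launch-time check: render one representative blurred node
// off-screen and measure how long it takes on this machine.
func blurEffectIsAffordable(on view: SKView) -> Bool {
    let sample = SKSpriteNode(color: .white, size: CGSize(width: 1024, height: 768))
    let effect = SKEffectNode()
    let blur = CIFilter(name: "CIGaussianBlur")
    blur?.setValue(10, forKey: kCIInputRadiusKey)
    effect.filter = blur
    effect.addChild(sample)

    let start = CFAbsoluteTimeGetCurrent()
    _ = view.texture(from: effect)      // forces the blur to actually render
    let elapsed = CFAbsoluteTimeGetCurrent() - start

    return elapsed < 0.008              // under ~8 ms: keep the SKEffectNode
}
```

You would call something like this once at launch with your game's SKView, before deciding whether to attach the effect node at all.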
Blurs are really weird things, programmatically. A box blur can give you most of the appearance of a nice, soft Gaussian blur for much less processing cost. A zoom or motion blur (that looks good) is surprisingly expensive, even on strong hardware.
And there are some amazingly effective "cheats" when doing blurs. Because there's no need for detail, you can heavily optimise the operations, particularly if the blurs are strong.
Apple, it's believed, does something like this with its blurs, for example (a rough sketch follows the steps below):
Massively shrink the target image
Do a gaussian blur on this tiny image
Scale it back up, somewhat
Apply a cheap Box Blur to soften it
Fully scale back to the desired size
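Here is a minimal Core Image sketch of those five steps; the function name, scale factors and blur radii are made up for illustration, not Apple's actual values:

```swift
import CoreImage
import CoreGraphics

// Rough sketch of the shrink-blur-upscale "cheat" described above.
func cheapBlur(_ input: CIImage) -> CIImage {
    // 1. Massively shrink the image (here: 1/8 in each dimension).
    let small = input.transformed(by: CGAffineTransform(scaleX: 0.125, y: 0.125))

    // 2. Gaussian-blur the tiny image; the radius can stay small because
    //    every remaining pixel already covers an 8x8 block of source pixels.
    let blurred = small.applyingFilter("CIGaussianBlur",
                                       parameters: [kCIInputRadiusKey: 4])

    // 3. Scale it part of the way back up.
    let half = blurred.transformed(by: CGAffineTransform(scaleX: 4, y: 4))

    // 4. Apply a cheap box blur to soften the scaling artifacts.
    let softened = half.applyingFilter("CIBoxBlur",
                                       parameters: [kCIInputRadiusKey: 2])

    // 5. Scale fully back to the original size.
    return softened.transformed(by: CGAffineTransform(scaleX: 2, y: 2))
}
```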
By way of a (terrible) example that benefits well from scaling (with filtering set for good-quality scaling):
This is the full sized image blurred:
And here's a version of the same image, scaled to a 16th of its original size, blurred, and then the blurred image scaled back up. As you can see, due to the good scaling and lack of detail, there's hardly any difference in the blurred image, but the blur takes MUCH less processing energy and time:
I'm using the Kinect 2 to rotate and zoom a virtual camera around a 3D object by moving my hand in all three directions. The problem I'm currently tackling is that these operations are executed with some noticeable delay. When my hand is steady again, the camera still continues to move for a short time. It feels as if I'm pushing the camera rather than controlling it in real time. Perhaps the frame rate is a problem: as far as I know the Kinect runs at 30 FPS while my application runs at 60 FPS (VSync enabled).
What could be the cause for this issue? How can I control my camera without any significant delay?
The Kinect is a very graphics- and processing-intensive piece of hardware. For your application I'd suggest a minimum specification of a GTX 960 and a 4th-generation i7 processor. Your hardware will be a prime factor in how fast you can compute Kinect data.
You're also going to want to avoid loops as much as possible and rely on multithreading instead; if you do loop, make sure there are no foreach loops, as they can take longer to execute. It is very important that the code reading data from the Kinect and the code issuing the camera commands run asynchronously.
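The asynchronous pattern itself is language-agnostic. A minimal sketch of the idea (in Swift, with readHandPosition() and updateCamera(with:) as hypothetical stand-ins for the blocking Kinect read and your camera command): poll the sensor on a background queue and let the 60 FPS render loop consume only the most recent sample, so it never waits on the device.

```swift
import Foundation

// Hypothetical stand-ins for the real Kinect read and camera update.
func readHandPosition() -> (x: Double, y: Double, z: Double) { (0, 0, 0) }
func updateCamera(with hand: (x: Double, y: Double, z: Double)) { }

final class HandTracker {
    private let sensorQueue = DispatchQueue(label: "kinect.read", qos: .userInteractive)
    private let lock = NSLock()
    private var latest: (x: Double, y: Double, z: Double) = (0, 0, 0)

    // Keep the blocking sensor reads off the render thread entirely.
    func start() {
        sensorQueue.async { [weak self] in
            while true {
                guard let self = self else { return }
                let sample = readHandPosition()   // blocking read, ~30 Hz
                self.lock.lock()
                self.latest = sample
                self.lock.unlock()
            }
        }
    }

    // Called from the 60 FPS render loop: just grab the newest sample.
    func tick() {
        lock.lock()
        let hand = latest
        lock.unlock()
        updateCamera(with: hand)
    }
}
```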
The Kinect will never be real-time responsive; there is just too much data to handle. The best you can do is optimize your code and increase your hardware power to shrink the response time.
I'm programming a 2D game with HTML5 canvas, so rendering all the objects and frames shouldn't take a lot of time.
I want to ask about both 2D and 3D games.
Suppose I make a change to one (or more) of the objects. Should I render all the objects when I only need to redraw that one? Is rendering all objects and frames the only option? And if the game were 3D, wouldn't rendering take a lot of time, especially on slow internet connections?
So [performance] depends only on the browser and computer of the user.
If you are making a game which is played simultaneously across several computers, then the connection speed is important. Otherwise, the connection speed is mostly irrelevant (except as it relates to initially downloading your game resources). The CPU/GPU and memory of the local computer will most affect your app's performance.
About performance
In theory, you get better performance if you redraw only those game items which have changed since the last frame.
In practice, redrawing one game item often requires redrawing multiple other game items. You might want to redraw the player who moved, but in doing so you must also redraw the background plus any other game items colliding with the player.
As a result, it's often better to clear the entire canvas and redraw all the game items in their current position. So the unsatisfying answer to your question is that you should test your unique game app to determine if you can efficiently redraw just changed items or if a complete redrawing is faster.
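The trade-off is the same in any renderer, so here is a language-agnostic sketch of the usual compromise (written in Swift with CGRect only for brevity; markDirty, redrawEverything and redraw(rect:) are hypothetical hooks into your own draw code): accumulate dirty rectangles each frame and fall back to a full clear-and-redraw once they cover too much of the canvas.

```swift
import CoreGraphics

// Sketch of a dirty-rect accumulator.
struct DirtyTracker {
    let canvasBounds: CGRect
    private(set) var dirty: [CGRect] = []

    mutating func markDirty(_ rect: CGRect) {
        dirty.append(rect)
    }

    mutating func flush(redrawEverything: () -> Void,
                        redraw: (CGRect) -> Void) {
        defer { dirty = [] }
        guard !dirty.isEmpty else { return }      // nothing changed: draw nothing

        // A moved object dirties both its old and new positions, and anything
        // overlapping those areas must be repainted too, so the union grows fast.
        let union = dirty.reduce(CGRect.null) { $0.union($1) }
        let coverage = (union.width * union.height) /
                       (canvasBounds.width * canvasBounds.height)

        // Past some threshold (2/3 here, purely a guess) a full clear-and-redraw
        // is simpler and usually no slower.
        if coverage > 0.66 {
            redrawEverything()
        } else {
            redraw(union)
        }
    }
}
```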
Some background:
I have an existing OS X card game app that uses OpenGL.
The window is resizable, and a 4:3 aspect ratio is always maintained.
When the window is resized, the OpenGL view is resized accordingly. All visual elements are scaled accordingly. i.e. the cards maintain their relative sizes and distances from each other.
I'm interested in moving the code to a system that either uses Sprite Kit, or one predominantly based on Core Animation layers. Sprite Kit is more attractive to me in terms of feature set for my needs, but...
... I am concerned about Sprite Kit performance (or rather, needless work, particularly on battery-powered Macs) for a game that essentially blasts the same textures to the screen at 60 fps even when nothing much is happening. (Most of the time, the cards are static, as the player ponders their next move.)
To reduce some of the (repetitive) drawing required, particularly at very large window sizes (e.g. fullscreen on a 30" monitor), I'm interested in using a "dirty rects/region" or "as-required" drawing system.
Question:
Does Sprite Kit provide some kind of dirty-rect drawing system, or the ability to implement such a drawing system? (Or, is it basically going to draw everything over and over at 60fps, regardless of the need to redraw?)
SK is an OpenGL renderer, so naturally it will redraw its contents every frame. That, however, doesn't make it slow. The dirty-rect drawing of UI frameworks is a way to improve performance and reduce power consumption, but those frameworks have to use that approach because their rendering is typically a lot slower (often not hardware accelerated) than an OpenGL renderer's.
On the other hand SK can be slower frame over frame if the rendered scene's complexity is extreme. But that sounds highly unlikely for a card game.
Generally you shouldn't concern yourself with performance until you've written some code to test it with. Premature optimization and all that...
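That said, if measurement does show the constant 60 fps redraw costing battery while the cards sit still, SpriteKit has no dirty-rect system, but a coarser substitute is simply to pause the view whenever nothing is animating and wake it on input. A rough sketch (the hasActions() check is a stand-in for your game's own "is anything moving?" test, and a real app would also wake on keyboard and trackpad events):

```swift
import SpriteKit
import AppKit

final class CardScene: SKScene {
    // Called by SpriteKit at the end of every frame.
    override func didFinishUpdate() {
        let animating = children.contains { $0.hasActions() }
        if !animating {
            view?.isPaused = true     // stop the render loop until we unpause
        }
    }
}

final class CardTableView: SKView {
    override func mouseDown(with event: NSEvent) {
        isPaused = false              // resume drawing before handling input
        super.mouseDown(with: event)
    }
}
```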
Since blending is hurting our game's performance, we tried several strategies for creating the "illusion" of blending. One of them is drawing a sprite only every odd frame, so the sprite is visible half of the time. The effect is quite good (you need a solid frame rate, by the way, or the sprite will flicker noticeably).
Despite that, I would like to know if there are any good insights out there on avoiding blending in order to improve overall performance without compromising the visual experience (too much).
Is it the actual blending that's killing your performance? (i.e. video memory bandwidth)
What games commonly do these days to handle lots of alpha-blended stuff (think large explosions that cover the whole screen): render it into a smaller texture (e.g. 2x2 or 4x4 smaller than the screen), and composite it back onto the main screen.
Doing that might require rendering depth buffer of opaque surfaces into that smaller texture as well, to properly handle intersections with opaque geometry. On some platforms (consoles) doing multisampling or depth buffer hackery might make that a very cheap operation; no such luck on regular PC though.
See the article from GPU Gems 3, for example: High-Speed, Off-Screen Particles. Christer Ericson's blog post covers a lot of optimization approaches as well: Optimizing the rendering of a particle system.
Excellent article here about rendering particle systems quickly. It covers the smaller off-screen buffer technique and suggests quite a few other approaches.
You can read it here
It is not quite clear from your question what kind of blending is hurting your game's performance. Generally blending is blazingly fast. If your problems are particle-system related, then what is most likely to kill the frame rate is the number and size of particles drawn. In particular, lots of close-up (and therefore large) particles demand high memory bandwidth and fill rate from the graphics card. I have implemented a particle system myself, and while I can render tons of particles in the distance, I very much feel the negative impact of, e.g., flying through smoke (which fills the entire screen because the viewer is in the midst of it) on weaker hardware.