The UWP version of our app runs at a much lower frame rate (6 fps vs. 24 fps) than the desktop equivalent. Note that both versions were tested on the same hardware.
Both versions are built with SharpDX; the only difference is how the render targets are set up. The desktop app uses an HwndRenderTarget, and the UWP app uses a SurfaceImageSource brush that paints into a Rectangle.
We've narrowed the main culprit (on the CPU side, at least) down to FillGeometry, which consumes a large share of the frame time on UWP.
Is there a reason why FillGeometry would take much longer in the above UWP configuration compared to desktop?
Note: The rendering code is identical in both versions, so please avoid suggestions that would affect both implementations equally, such as using GeometryRealization instead of Geometry. We're looking for the reason for the difference between the rendering performance on UWP and on desktop.
If there are factors other than Geometry that might be affecting performance, it would be useful to know those as well, since our profiling tools might not be altogether precise.
One of the factors seems to be that internal Direct2D clipping works differently in these cases.
Our scene has hundreds of geometries. Our initial code did not clip to the viewport; it relied instead on Direct2D to do the clipping. This resulted in the difference in frame rates mentioned in the original post.
When explicit clipping was added, the frame rate of the UWP version increased to around 16 fps (still below that of the desktop app), while the frame rate of the desktop version was not affected much. A simplified sketch of the kind of clipping we added is below.
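This is not our exact code; renderTarget, geometries, brush, and the viewport size stand in for our real objects (SharpDX):

```csharp
using SharpDX.Direct2D1;
using SharpDX.Mathematics.Interop;

// Sketch only: renderTarget, geometries, brush and the viewport size are
// placeholders for our real objects.
void DrawClipped(RenderTarget renderTarget, Geometry[] geometries, Brush brush,
                 float viewportWidth, float viewportHeight)
{
    // Clip all drawing to the visible viewport instead of leaving the
    // clipping entirely to Direct2D.
    var viewport = new RawRectangleF(0, 0, viewportWidth, viewportHeight);
    renderTarget.PushAxisAlignedClip(viewport, AntialiasMode.Aliased);

    foreach (Geometry geometry in geometries)
    {
        // Cheap CPU-side cull: skip geometries entirely outside the
        // viewport so FillGeometry is never called for them.
        RawRectangleF b = geometry.GetBounds();
        if (b.Right < 0 || b.Left > viewportWidth ||
            b.Bottom < 0 || b.Top > viewportHeight)
            continue;

        renderTarget.FillGeometry(geometry, brush);
    }

    renderTarget.PopAxisAlignedClip();
}
```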
So at this point our hypothesis is that different clipping routines are at work in the two cases.
This isn't fully solved, as we still have a significant difference in frame rates. But it's a start.
Today's displays have a huge range of sizes and resolutions. For example, my 34.5 cm × 19.5 cm display (a diagonal of 39.6 cm, or 15.6") has 1366 × 768 pixels, whereas the MacBook Pro (3rd generation) with a 15" diagonal has 2880 × 1800 pixels.
Multiple people have complained that everything is too small on such high-resolution displays (see example). That is easy to explain when developers define their GUI in pixels. On "traditional" displays this is not a big problem, since the pixels are about the same size on most monitors. But on the new monitors with much higher pixel density, the pixels are simply smaller.
So how can/should user interface developers deal with this problem? Is it possible to get the physical size of the screen? Is it possible to set physical sizes instead of pixel-based ones? Is this still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
(While CSS seems to support cm, when I try it here, the rendered size is not the size I set.)
How can/should user interface developers deal with this problem?
Use a toolkit or framework that supports resolution independence. WPF is built from the ground up to be resolution-independent, but even an old framework like Windows Forms can learn new tricks. OS X/iOS and Windows (or the browser, if we're talking about the web) may try to take care of the problem with automatic scaling, but if bitmap graphics are involved, developers might need to provide different bitmaps, as on Android (which faces the widest range of resolutions and densities of any OS).
Is it possible to get the physical size of the screen?
No, and developers shouldn't care about it. Developers should only care about the class of the device (say, different UIs for tablet and smartphone), and perhaps the DPI, to decide which bitmap resource to use. Vector resources and fonts should be scaled by the framework.
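For instance, in WPF (C#, .NET 4.6.2 or later) the framework will tell you the current DPI scale, so you never need the physical screen size. A minimal sketch, where the "@2x" file naming is an illustrative convention, not a WPF feature:

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static ImageSource LoadIconForDpi(Visual anyVisualInTree)
{
    // WPF layout already works in device-independent units (1/96 inch),
    // so we only need the scale factor to choose a bitmap variant.
    DpiScale dpi = VisualTreeHelper.GetDpi(anyVisualInTree);

    // Pick a higher-resolution bitmap on high-DPI screens.
    string suffix = dpi.DpiScaleX >= 2.0 ? "@2x" : "";
    return new BitmapImage(new Uri($"Icons/save{suffix}.png", UriKind.Relative));
}
```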
Is this still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
It depends on when you last read about it. Windows support is still spotty, even in its own built-in apps, and while anyone developing in WPF or UWP has it easy, don't expect major third-party apps to join in soon. OS X display scaling seems to work a bit better, while modern mobile OSes either run on a limited range of resolutions (iOS and Windows Phone) or handle every resolution imaginable quite nicely (Android).
There are a few ways to deal with different screen sizes. For example, when I make mobile apps in Java, I either use DIP (density-independent pixels, which stay at a fixed physical size) or make objects occupy a percentage of the screen with simple math. For web development, you can use vw and vh (viewport width and viewport height): by appending these to a value instead of px, objects take up a percentage of the viewport; for example, 100vh takes 100% of the viewport height. What I think is the best way, though time-consuming, is to use a library like Bootstrap that automatically resizes elements, even when the window is resized. W3Schools has a good tutorial on Bootstrap, and more detailed explanations of any of these options are an easy Google search away.
Designing a GUI in today's era of display diversity is a real challenge. I would suggest several hints, mainly about GUI application design:
Never set or expect a constant pixel size for text: the user can change it in the OS's system settings. Use real-world measures for text and check its pixel size when drawing. Provide some way to fit text of arbitrary size within the boundaries of the window.
Never set or expect constant pixel sizes for GUI widgets. Try to position them adaptively, according to the size of the window; most GUI widget toolkits today provide such facilities.
Never set or expect constant pixel sizes for dialog windows. Let the OS choose the size for you and then work with what you get (X11), or, if you need to set a size and position yourself (Windows), define it as a percentage of the screen size (see the sketch after this list).
If possible, use scalable image formats for icons; SVG is actually great for this. Shipping sets of bitmap icons at different sizes is acceptable, but it wastes memory and still will not scale perfectly in most cases.
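A minimal sketch of the percentage-of-screen-size idea in WPF (C#); the 60% factor is an arbitrary illustration:

```csharp
using System.Windows;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Size and position relative to the screen's working area instead of
        // hard-coding pixels. WPF units are device-independent (1/96 inch).
        double w = SystemParameters.WorkArea.Width;
        double h = SystemParameters.WorkArea.Height;
        Width = w * 0.6;          // 60% of the screen width (arbitrary choice)
        Height = h * 0.6;         // 60% of the screen height
        Left = (w - Width) / 2;   // centered rather than at fixed coordinates
        Top = (h - Height) / 2;
    }
}
```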
I have a Qt application built around a QGraphicsView/QGraphicsScene. The graphical performance is fine: animations are extremely smooth, and a simple high-resolution timer says frames are drawing at up to 400 fps. However, the application constantly uses 15% CPU according to Task Manager. I have run performance analysis on it in Visual Studio 2012, and it shows that most of the samples are taken in the QApplication::notify function.
I have set the viewport to render with a QGLWidget in hopes that offloading the drawing functions to the GPU would help, but that had no impact at all on CPU usage.
Is this normal? Is there something I can do to reduce CPU usage?
Well, there you have it: a 400 FPS frame rate. That will load one of your cores at 100%. There is a reason people usually cap frame rates: a high frame rate puts a strain on the Qt event system, which drives the graphics.
Limit your frame rate to 60 FPS and problem solved.
I'm not updating the view unless an event occurs that updates an individual graphicswidget
Do not update the scene for each and every scene element change. This is likely the cause of the overhead. You can make multiple scene item changes, but render the scene at a fixed rate.
Also, I notice you said graphicswidget, which I assume is QGraphicsWidget; this might be problematic too. QObject-derived classes are a little on the heavy side, and the Qt event system comes with overhead too, which is why the regular QGraphicsItem is not QObject-derived. If you use graphics widgets excessively, that may be a source of overhead, so see if you can get away with the lighter QGraphicsItem class and some lighter mechanism to drive your scene.
Some background:
I have an existing OS X card game app that uses OpenGL.
The window is resizable, and a 4:3 aspect ratio is always maintained.
When the window is resized, the OpenGL view is resized accordingly. All visual elements are scaled accordingly. i.e. the cards maintain their relative sizes and distances from each other.
I'm interested in moving the code to a system that either uses Sprite Kit, or one predominantly based on Core Animation layers. Sprite Kit is more attractive to me in terms of feature set for my needs, but...
... I am concerned about Sprite Kit's performance (or rather, its needless work, particularly on battery-powered Macs) for a game that essentially blasts the same textures to the screen at 60 fps even when nothing much is happening. (Most of the time, the cards are static as the player ponders their next move.)
To reduce some of the (repetitive) drawing required, particularly at very large window sizes (e.g. fullscreen on a 30" monitor), I'm interested in using a "dirty rects/region" or "as-required" drawing system.
Question:
Does Sprite Kit provide some kind of dirty-rect drawing system, or the ability to implement such a drawing system? (Or, is it basically going to draw everything over and over at 60fps, regardless of the need to redraw?)
SK is an OpenGL renderer, so naturally it will redraw its contents every frame. That, however, doesn't make it slow. The dirty-rect drawing of UI frameworks is a way to improve performance and reduce power consumption, but those frameworks have to use that approach because their rendering is typically a lot slower (often not hardware-accelerated) than an OpenGL renderer's.
On the other hand, SK can be slower frame over frame if the rendered scene is extremely complex, but that sounds highly unlikely for a card game.
Generally, you shouldn't concern yourself with performance until you've written some code to test it with. Premature optimization and all that...
I've been playing around with graphics on Android, and I noticed that it takes a lot of time and resources to draw bitmaps with the canvas. Especially in high-end games that require many images to be drawn at once, this could be pretty bad for things such as the frame rate. If I decide to learn and use OpenGL, would it make a big difference? Or maybe I'm not using the canvas right?
It depends on what version of Android you're talking about.
In Android 2.x, canvas operations are not hardware-accelerated at all, so nothing uses the GPU; everything is processed pixel by pixel on the CPU.
In Android 3.0, hardware acceleration was added to the canvas, so that you could have a GPU-accelerated canvas.
OpenGL ES always uses hardware acceleration, so on Android 2.x it will always be much, much faster than a canvas (it is your only real option there for any kind of game that needs a reasonable frame rate).
On hardware-accelerated Android versions, you probably won't notice much of a difference between canvas and OpenGL, because both leverage the GPU, provided your canvas has hardware acceleration enabled.
(Sorry if I missed the answer)
So I just started making an XNA game for Windows.
While designing the UI, I was wondering how to scale it at different resolutions.
Imagine that I design the UI for a 1920×1080 screen: how do I make sure it is displayed correctly on a smaller 4:3 screen?
Thanks in advance!
Simon.
Usually you design the GUI so that it is usable at the lowest resolution your game offers (traditionally 800×600); then you can be sure everything fits correctly at all resolutions.
This is usually why games at higher resolutions seem to have a lot more space for the playfield and less for the UI than at lower resolutions.
You could also scale the UI up when the resolution is higher, which is as easy as using the scale parameter on SpriteBatch.Draw, or you could do it a bit more smartly by 9-slicing your assets and aligning them to constant percentages of the width/height (see the sketch below).
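As a minimal sketch of the scaling approach, here is a closely related variant: instead of passing a scale to every SpriteBatch.Draw call, hand one transform matrix to SpriteBatch.Begin (assuming XNA 4.0; the field names and the 1920×1080 design resolution are illustrative):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class UiGame : Game
{
    // Assumed to be created in LoadContent, as in the standard XNA template.
    SpriteBatch spriteBatch;
    Texture2D uiTexture;

    // Design all UI coordinates against a fixed "virtual" resolution.
    const int VirtualWidth = 1920;
    const int VirtualHeight = 1080;

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        // Uniform scale from the design resolution to the real back buffer.
        // (Letterboxing for 4:3 screens is left out of this sketch.)
        float scale = (float)GraphicsDevice.Viewport.Height / VirtualHeight;
        Matrix uiTransform = Matrix.CreateScale(scale, scale, 1f);

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          null, null, null, null, uiTransform);
        // All Draw calls here use virtual 1920x1080 coordinates.
        spriteBatch.Draw(uiTexture, new Vector2(100, 100), Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }
}
```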