I'm trying to update an NSView's frame every millisecond. It works for a few frames, but it starts to stutter and block very quickly.
What should I do to get smooth updates?
The WindowServer will only ever update the screen at a maximum of 60 FPS (unless you turn that limit off with Quartz Debug, but in general it will be limited to 60 FPS, and it will never be 1000 FPS). Attempting to force a redraw any more frequently than that is a waste of effort.
I would expect, under normal circumstances, that calling -setFrame: on an NSView will cause -setNeedsDisplay: to be called, which means your view will be redrawn the next time the WindowServer draws a frame. So even if you're calling -setFrame: 1000 times a second, it's not going to draw your view 1000 times per second.
If you're seeing stuttering, I would bet that what's actually happening is that your view is taking more than 1/60th of a second to redraw. It's hard to do any non-trivial raster drawing (i.e. the kind you would be doing in -[NSView drawRect:]) in less than 1/60th of a second.
If you're simply trying to move the view (and you don't need to redraw it), you might try calling -setFrameOrigin: and using layer-backed views. I would expect AppKit/CoreAnimation to be able to re-position a layer-backed view (without re-rasterizing it) in well under 1/60th of a second.
If you want something more complex than simply re-positioning a view, and you want it to happen at the maximum frame rate (again, 60 FPS), you are likely going to want to look into using OpenGL.
But really, the take home message here is "don't try to do stuff 1000 times per second." When the Leap Motion delegate method is called, update your view's position, and let the WindowServer do the rest at its own pace.
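To make that concrete, here is a minimal, platform-agnostic sketch of the idea, shown in C++ rather than the Objective-C you're using; onInputEvent, drawViewAt, and the 16 ms pacing are all illustrative assumptions, not AppKit API. The high-frequency callback only records the latest position, and a fixed-rate loop consumes it:

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<float> latestX{0.0f};   // last position reported by the device

void drawViewAt(float x) { (void)x; /* stand-in for the actual view update */ }

void onInputEvent(float x) {        // may fire ~1000 times per second
    latestX.store(x, std::memory_order_relaxed);  // just record it, don't draw
}

void renderLoop() {                 // paced at roughly the display rate (~60 Hz)
    for (;;) {
        drawViewAt(latestX.load(std::memory_order_relaxed));
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}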
I'm trying to implement a cross-platform UI library that uses as few system resources as possible. I'm considering either using my own software renderer or OpenGL.
For stationary controls everything's fine; I can repaint only when needed. However, when it comes to implementing animations, especially animated blinking carets like the 'phase' caret in Sublime Text, I don't see an easy way to balance resource usage and performance.
A blinking caret requires the caret to be redrawn very frequently (15-20 times per second at least, I'd guess). On one hand, the software renderer supports partial redraw but is far too slow to be practical (3-4 fps for large redraw regions, say 1000x800, which makes animations impossible). On the other hand, OpenGL doesn't support partial redraw very well as far as I know, which means the whole screen would need to be rendered at 15-20 fps constantly.
So my question is:
How are carets usually implemented in various UI systems?
Is there any way to have OpenGL render to only a portion of the screen?
I know glViewport enables rendering to part of the screen, but due to double buffering (or other details) the rest of the screen isn't preserved, so I still end up rendering the whole screen again.
First you need to ask yourself:
Do I really need to partially redraw the screen?
OpenGL, or rather the GPU, can draw thousands of triangles with ease. So before you start fiddling with partial redrawing of the screen, you should benchmark and see whether it's worth looking into at all.
This doesn't, however, mean you have to redraw the screen endlessly. You can still redraw only when changes happen.
Thus if you have a cursor blinking every 500 ms, then you redraw once every 500 ms. If you have an animation running, then you continuously redraw while that animation is playing (or every time the animation does a change that requires redrawing).
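As a sketch of that scheduling (plain C++; render() and the 500 ms interval are placeholder assumptions, not part of any particular toolkit):

#include <chrono>
#include <thread>

void render(bool caretVisible) { (void)caretVisible; /* repaint the caret region */ }

void caretBlinkLoop() {
    using clock = std::chrono::steady_clock;
    bool caretVisible = true;
    auto next = clock::now();
    for (;;) {
        render(caretVisible);        // one redraw per toggle, not per frame
        caretVisible = !caretVisible;
        next += std::chrono::milliseconds(500);
        std::this_thread::sleep_until(next);   // idle between redraws
    }
}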
This is what Chrome, Firefox, etc. do. You can see it if you open the Developer Tools (F12) and go to the Timeline tab.
Take a look at the following screenshot. The first row of the timeline shows how often Chrome redraws the window.
The first section shows a lot of continuous redrawing, which was because I was scrolling around on the page.
The last section shows a single redraw roughly every 500 ms, which was the cursor blinking in a textbox.
Note that this doesn't tell you whether Chrome is fully redrawing the window or only parts of it; it only shows the frequency of the redraws. (If you want to see the redrawn regions, both Firefox and Chrome have a "Show Paint Rectangles" option.)
To get around the problem of double buffering and partial redrawing, you could instead draw to a framebuffer object. Then you can utilize glScissor() as much as you want. If you have various static elements and only a few dynamic ones, you could even have multiple framebuffer objects: draw the static content once, and continuously update only the framebuffer containing the dynamic content.
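A rough sketch of that render pass (raw OpenGL 3+ calls, assuming a loader such as GLEW and an FBO with a color attachment created up front; setup, draw calls, and error handling omitted):

#include <GL/glew.h>

// Redraw only the dirty rectangle into the retained offscreen buffer,
// then blit the whole buffer to the window's default framebuffer.
void redrawDirtyRegion(GLuint fbo, int x, int y, int w, int h,
                       int winW, int winH) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);           // clip all drawing to the dirty rect
    // ... draw only the widgets intersecting that rect ...
    glDisable(GL_SCISSOR_TEST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBlitFramebuffer(0, 0, winW, winH, 0, 0, winW, winH,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
}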
However (and I can't emphasize this enough), benchmark and check whether this is even needed. Having two framebuffer objects could be more expensive than just always redrawing everything. The same goes for, say, having a buffer for each rectangle versus packing all rectangles into a single buffer.
Lastly, to give an example, let's take NanoGUI (a minimalistic GUI library for OpenGL). NanoGUI continuously redraws the screen.
The problem with not just continuously redrawing the screen is that you now need a system for issuing redraws. Calling setText() on a label now needs to call back and tell the window to redraw. And what if the parent panel the label is added to isn't visible? Then setText() just issued a redundant redraw of the screen.
The point I'm trying to make is that a system for issuing redraws of the screen can be more error-prone. So unless continuous redrawing is actually an issue, it is definitely the better starting point.
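Such an invalidation system might look like this in outline (a sketch with made-up names, not any specific library's API):

#include <string>

struct Window {
    bool dirty = false;
    void requestRedraw() { dirty = true; }   // invalidate; don't draw yet
    void drawAll() { /* repaint everything */ }
};

struct Label {
    Window* window;
    std::string text;
    void setText(const std::string& t) {
        if (t == text) return;               // no visual change, no redraw
        text = t;
        window->requestRedraw();
    }
};

// Main loop: poll events, then redraw once if anything was invalidated.
// while (running) { pollEvents(); if (win.dirty) { win.drawAll(); win.dirty = false; } }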
SceneKit calls its rendering delegates sixty times a second to allow a host application to adjust parameters in the contained scene to provide animation, physics, etc.
My scene is large (360,000 vertices). Almost all (~95%) of the scene is rotated slightly every minute (every 3,600 delegate calls). A very small remainder of the scene (about 300 nodes ~ 15,000 vertices) is moved once a second (every 60 delegate calls); all the nodes are created and their properties set before the application 'starts' (in viewDidLoad) and then only their positions are changed, as described above, in the delegate calls.
My frame rate only just stays at 60 fps, and CPU usage is about 30% according to Xcode. All that effort is being expended in the rendering loop (there's no interaction and no other work), so I have two questions:
1) does 30% CPU seem reasonable, given this general description of my app? More specifically, since my delegate code seems simple and is invoked in <2% of the rendering loops, could I really be driving SceneKit to its limits?
2) if so, are there any SceneKit tricks to clawing back some CPU? Can the delegate call rate be slowed, for example?
This is with macOS 10.12.3 and Xcode 8 (Swift 3) on a 2.8 GHz i7 2015 MacBook Pro.
How about trying to flatten the nodes and their children using flattenedClone?
Creating SCNLevelOfDetail objects for your geometries is worth a try.
Is anything else moving? Can you reduce your view's preferredFramesPerSecond?
I am drawing an animation using double-buffered GDI on a window, on a system where DWM composition is enabled, and seeing clearly visible tearing onscreen. Is there a way to prevent this?
Details
The animation takes the same image and moves it right to left across the window. The number of pixels moved is determined from the time elapsed since the animation started relative to its total duration (measured with timeGetTime at 1 ms resolution), giving a fraction complete that is applied to the whole window width. The animation draws in a loop without processing application messages; it calls the (VCL library) method Repaint, which internally invalidates and then calls UpdateWindow for the window in question, calling directly into the message procedure with WM_PAINT. The VCL implementation of the paint handler uses BeginBufferedPaint. Painting is itself double-buffered.
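The position calculation is essentially this (a sketch of what's described above; animX and the parameter names are mine, not the actual code):

#include <windows.h>
#pragma comment(lib, "winmm.lib")   // timeGetTime lives in winmm

int animX(DWORD startMs, DWORD durationMs, int windowWidth) {
    DWORD now = timeGetTime();                     // 1 ms resolution
    double fraction = double(now - startMs) / double(durationMs);
    if (fraction > 1.0) fraction = 1.0;            // clamp at "complete"
    return int(fraction * windowWidth);            // pixels across so far
}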
The aim is to have as high a frame rate as possible so the animation moves smoothly across the screen. (The drawing is double-buffered to remove flicker and to ensure a whole image or frame is onscreen at any one time. It invalidates and updates directly by calling into the message procedure, without doing other message processing. Painting is implemented using modern techniques (e.g. BeginBufferedPaint) for Aero composition.) Within this, painting is done in a couple of BitBlt calls (one for the left side of the animation, i.e. what's moving offscreen, and one for the right side, i.e. what's moving onscreen).
When watching the animation, there is clearly visible tearing. This occurs on Windows Vista, 7 and 8.1 on multiple systems with different graphics cards.
My approach to handling this has been to reduce the rate at which it draws, or to try to wait for VSync before painting again. This might be the wrong approach, so the answer to this question might be "Do something else completely: X". If so, great :)
(What I'd really like is a way to ask the DWM to compose / use only fully-painted frames for this specific window.)
I've tried the following approaches, none of which remove all visible tearing. Therefore the question is: is it possible to avoid tearing when using DWM composition, and if so, how?
Approaches tried:
Getting the monitor refresh rate via GetDeviceCaps(Application.MainForm.Handle, VREFRESH) and sleeping for 1000 / (refresh rate) milliseconds. Slightly improved over painting as fast as possible, but that may be wishful thinking; the animation rate is perceptually slightly less smooth. (Tweaks: normal Sleep and a high-resolution spin-wait using timeGetTime.)
Using DwmSetPresentParameters to try to limit updating to the rate at which the code draws. (Variations: lots of buffers (cBuffer = 8): no visible effect; specifying a source rate of (monitor refresh rate / 1) and sleeping using the above code: same as the sleeping approach alone; specifying a refresh per frame of 1, 10, etc.: no visible effect; changing the source frame coverage: no visible effect.)
Using DwmGetCompositionTimingInfo in a variety of ways:
While cFramesPending > 0, spin;
Get cFrame (frame composed) and spin while this number doesn't change;
Get cFrameDisplayed and spin while this doesn't change;
Calculating a time to sleep to by adding qpcVBlank + qpcRefreshPeriod, and then while QueryPerformanceCounter returns a time less than this, spin
All these approaches have also been varied by painting, then spinning/sleeping before painting again; or the reverse: sleeping and then painting.
Few seem to have any visible effect, and what effect there is is hard to quantify and may just be a result of a lower frame rate. None prevent tearing, i.e. none make the DWM compose the window with a "whole" copy of the contents of the window's DC.
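For concreteness, the last DwmGetCompositionTimingInfo variant above might look roughly like this (a sketch of the described approach, not the actual code):

#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Spin until the QPC deadline qpcVBlank + qpcRefreshPeriod has passed.
void spinUntilNextVBlank() {
    DWM_TIMING_INFO ti = {};
    ti.cbSize = sizeof(ti);
    if (FAILED(DwmGetCompositionTimingInfo(NULL, &ti)))  // NULL = composition-wide timing
        return;
    LONGLONG deadline = (LONGLONG)(ti.qpcVBlank + ti.qpcRefreshPeriod);
    LARGE_INTEGER now;
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < deadline);
}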
Advice appreciated :)
Since you're using BitBlt, make sure your DIBs are 4 bytes/pixel. With 3 bytes/pixel, GDI is horribly slow while DWM is running; that could be the source of your tearing. Another BitBlt issue I've run into: if your DIB is somewhat large, the BitBlt call may take an unexpectedly long time. Splitting one call into smaller calls that each draw only a portion of the data might help. Both of these helped in my case, only because BitBlt itself was running too slowly, which led to video artifacts.
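For reference, creating a 32-bpp (4 bytes/pixel) DIB section to BitBlt from might look like this (a minimal sketch; create32bppDib is a made-up helper name):

#include <windows.h>

HBITMAP create32bppDib(HDC hdc, int width, int height, void** bits) {
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;        // negative = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;           // 4 bytes/pixel: DWM-friendly
    bmi.bmiHeader.biCompression = BI_RGB;
    return CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, bits, NULL, 0);
}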
This is an algorithm/data-structure question about running different animations at the same time. For example, a ball falls one pixel per millisecond, a bullet moves 5 pixels per millisecond, and a man moves 1 pixel every 20 milliseconds. Now imagine there are hundreds of them at once. What is the best way of putting all the animations together, moving everything that needs to move in one function call, and removing the ones whose animations have completed? I don't want to create a thread for each one. What I want is one thread that moves all the items and sleeps until an object next needs to be moved.
Note: I'm using Java/Swing, drawing objects and images in a JPanel.
I recently did something similar in Python. I don't know if this is the best method, but here's what I did.
Create an abstract Event class with the following public interface:
tick - calculates how much time has passed since the last tick and performs work proportional to that time span. This should be called frequently to create the illusion of smooth movement; maybe sixteen times a second or so.
isDone - returns true when the Event has finished occurring.
Make a subclass of Event for anything that takes more than one frame to finish: rotating, scaling, color changes, etc. You might create a TweenEvent subclass of Event if you want to move an image from one part of the screen to another. During each tick, redraw the image a little farther from its original position and a little closer to its destination.
You can run many Events concurrently, like so:
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// (runs inside a method that handles InterruptedException)
List<Event> events = new ArrayList<>();
// add a bunch of TweenEvents here - one for a bullet, one for a ball, etc.
while (true) {
    Thread.sleep(1000 / 16);               // roughly 16 ticks per second
    Iterator<Event> it = events.iterator();
    while (it.hasNext()) {
        Event e = it.next();
        e.tick();
        if (e.isDone()) { it.remove(); }   // safe removal mid-iteration
    }
}
How does Flash deal with elements that are off-stage?
Obviously Flash doesn't actually render them (since they don't appear anywhere on-screen), but does the rendering process still run for them, slowing down my game as much as it would if the elements were on-screen?
Or does Flash intelligently ignore elements that don't fall into the renderable area?
Should I manually manage removing objects from the DisplayList and adding them back as they exit and enter the stage, or is this irrelevant?
Yes, they are slowing down your game.
In one of my early experiments I developed a side-scroller game with many NPCs scattered around the map, not all visible on the same screen. Their logic still had to run, but they weren't on the screen. Performance was significantly better when I removed them from the display list whenever they were irrelevant (by simply checking their X in relation to the 'camera'). Again, I'm not talking about additional code and events that may be attached to them, just plain graphical children of a movieclip.
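The culling check itself is simple; here is a generic sketch (in C++ rather than ActionScript, with made-up names) of keeping a child on the display list only while it overlaps the camera's horizontal span:

struct Sprite { float x, width; bool onDisplayList; };

// Add/remove the sprite from the display list as it enters/leaves view;
// addChild()/removeChild() would be the ActionScript equivalents.
void cullByCameraX(Sprite& s, float cameraX, float screenWidth) {
    bool visible = (s.x + s.width >= cameraX) && (s.x <= cameraX + screenWidth);
    if (visible != s.onDisplayList) {
        s.onDisplayList = visible;
    }
}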
The best practice, though, in my experience, is drawing the objects into bitmaps. If you're too deep into your game already this may be irrelevant, but if you have the time to invest, it is one of the best ways to get the most out of AS3 for 2D games. I found some of the best tutorials regarding bitmaps and AS3 at 8bitrocket:
http://www.8bitrocket.com/books/the-essential-guide-to-flash-games/ I can elaborate on the subject if you want, but I think I'm going off topic here.
Even if some display objects are outside the stage area, they are still executed. If they have any animation playing, that can slow down performance.
The question arises: why do we need to keep unused items outside the stage area? If you need to 'cache' the movieclips for faster loading, then load them in a keyframe the playhead will never reach. For example, load the display objects you want to show in frame 1, put a stop() in the actions panel of that frame, make it a keyframe, and load the unused animations in frame 2. Since there is a stop() in frame 1, control never reaches frame 2, but the display objects are cached.
Or, if you have code in the unused display objects and thus need to load them along with the main game components, try putting stop() in the frames of the unused display objects so that they don't animate.