GUI blocked during Canvas update - how to optimize this?

I am drawing about 12,000 objects to a JavaFX 2.2 Canvas. The GUI blocks for around 2 seconds, while my own measurement of the drawing code reports less than half a second of execution time.
So I am wondering how the measured execution time of my code can be so much shorter than the time the GUI is blocked - I guess there is some kind of buffering on the Canvas, so things are first written to a buffer and processed later? In other words, after ~0.5 seconds everything I drew to the Canvas had only been written to the buffer?
Assuming everything is buffered first leads to my next question: isn't the drawing of the buffered content always done on the UI thread, so I can't shorten the time the GUI is blocked while the buffer is drawn? Even if I issued the drawing calls from a different thread than the Application thread, as long as the buffer is still processed on the UI thread, I won't get rid of this ~1.5 seconds of GUI blocking, will I?
Thanks for any hint!
Update: Pseudocode:
long start = System.nanoTime();

// Using GraphicsContext, draw ~12,000 arc parts (GraphicsContext.strokeArc)
// to a Canvas.

long end = System.nanoTime();
double elapsedTime = (end - start) / 1_000_000_000.0; // in seconds
System.out.println("elapsed time: " + elapsedTime);
// prints something around 0.5 - yet the GUI hangs for around 2 seconds.
// Where do the additional 1.5 seconds come from?
I am doing everything on the application thread, so I understand that the GUI hangs for the 0.5 seconds during which I am issuing the drawing calls to the Canvas. What I can't understand is why the GUI hangs for ~2 seconds when my drawing code finished after 0.5 seconds.
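If the extra time really is the buffered drawing commands being rendered during the pulse, one idea I could try is to split the drawing across several pulses with an AnimationTimer, so that no single pulse blocks for too long. A rough sketch (ArcSpec and drawInChunks are just made-up names for illustration, not JavaFX API):

import java.util.List;
import javafx.animation.AnimationTimer;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.shape.ArcType;

public class ChunkedArcDrawer {

    // Hypothetical holder for one arc's parameters.
    public static class ArcSpec {
        final double x, y, w, h, startAngle, extent;
        ArcSpec(double x, double y, double w, double h, double startAngle, double extent) {
            this.x = x; this.y = y; this.w = w; this.h = h;
            this.startAngle = startAngle; this.extent = extent;
        }
    }

    // Issues arcsPerPulse strokeArc calls per pulse until the whole list has been drawn,
    // so the FX Application Thread is never blocked by all ~12,000 arcs at once.
    public static void drawInChunks(Canvas canvas, final List<ArcSpec> arcs, final int arcsPerPulse) {
        final GraphicsContext gc = canvas.getGraphicsContext2D();
        new AnimationTimer() {
            private int next = 0;
            @Override public void handle(long now) {
                int end = Math.min(next + arcsPerPulse, arcs.size());
                for (int i = next; i < end; i++) {
                    ArcSpec a = arcs.get(i);
                    gc.strokeArc(a.x, a.y, a.w, a.h, a.startAngle, a.extent, ArcType.OPEN);
                }
                next = end;
                if (next >= arcs.size()) stop(); // all drawing commands have been issued
            }
        }.start();
    }
}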

Related

scratch timer method not accurate

So I'm making a game that depends greatly on time and how fast a person responds and I noticed a little bug where it has a randomized time. For example: I have the method
wait (pick random .01 to 3) seconds
set ghost effect to 0
reset timer
repeat 5
    wait .05 seconds
    change ghost effect by 20
and each time I run this, I get different times. It can't be because I'm randomizing the wait time, since the reset timer block comes after that. I ran some tests and found that 7 out of 12 times I got 0.8 seconds, which is what I'm trying to get; 3 out of 12 times I got 0.7, and 2 out of 12 times I got 0.6. If there is any way to make the timer more accurate or improve my code to reduce lag, it would be much appreciated.
One general solution is to make how ghosted the sprite is a "function" of how long it's been since the animation started. That works something like this:
This animation animates the sprite changing all the way around the color effect wheel across four seconds. It'll be most clear how that works if we go over it step by step.
We want our animation to last for four seconds. But how do we actually make it last that long? As was seen in your script, just using a "repeat (40): wait 0.1 seconds" doesn't always result in waiting exactly four seconds. Instead, we use a "repeat until" loop: "repeat until (timer - animation start) > 4".
We get the "animation start" variable just by setting it to the current timer value when we start the animation. We'll see "timer - animation start" again later in; what it really means is the progress through the animation: "timer - animation start" starts at zero and gradually increases to four as the animation runs. (Of course, when it reaches 4, we want to stop the animation, and that's why we use the "repeat until" block.)
Here's the big question: how, given the current amount of time through the animation, can we decide what the color effect should be? It turns out, that's not so hard to answer, but we do need to think through it since it takes math. We want to transition from 0 to 200 over a period of 4 seconds. You can write that down as a rate: 200 units per 4 seconds, so, 200 / 4. Then we just multiply that rate by how far through the animation we are: (200 / 4 * progress). Progress is easy to get, again; we just reuse the "timer - animation start" blocks.
Do you believe that I'm right? Here's a list with some numbers to convince you (but really, you should try out this script yourself!):
0s: (timer - animation start) = 0, color effect = (200/4) * 0 = 0. This is the starting point of the animation, so it makes sense for the color effect to be zero.
1s: (timer - animation start) = 1, color effect = (200/4) * 1 = 50.
2s: (timer - animation start) = 2, color effect = (200/4) * 2 = 100. Halfway through the 4-second animation: since we're transitioning from 0 to 200, it makes sense to now be at 100.
3.5s: (timer - animation start) = 3.5, color effect = (200/4) * 3.5 = 175.
4s: (timer - animation start) = 4, color effect = (200/4) * 4 = 200. Now that we're at the end, we've transitioned all the way to 200.
To try this yourself, I do recommend implementing some "artificial lag". That just means adding a "wait (random 0.1 - 0.3) seconds" block to simulate the lag that might show up in a very complex project or on a slow computer.
Since we're just dealing with a basic math formula, it's very easy to change the numbers to get a different result. Here is a script which transitions from 0 to 100 over 2 seconds:
But here's a point where you might find a "gotcha" -- look at what happens if you add artificial lag:
The cat turns ghost-y... but doesn't ENTIRELY disappear! Yikes! So, what caused this?
Here's the problem: The animation stops before (timer - animation start) is exactly 2 seconds. So, we never run that 2s step, where the ghost effect would be 100 - and we're left with a sprite that is not entirely ghosted.
Luckily, the solution is simple. Just make sure you additionally switch to the final state after the animation has ended. Of course, that just means attaching the appropriate "set effect" block right after the "repeat until" loop:
Now the sprite's ghost effect will be set to 100 immediately after the loop, regardless of whatever it ended with.
By the way, when you tested these scripts out for yourself - you did, didn't you? - did you notice that this animates very smoothly? In fact, it animates as smoothly as possible. It will always animate at the top framerate that the user's computer can handle, since it runs every Scratch frame (there is no "wait n seconds" block)! Also, a simple test of your understanding - how can you re-implement the "glide (n) seconds to (x) (y)" block using this? It's definitely possible!
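If you'd like to play with the same idea outside of Scratch, here's roughly what it looks like written out in Java - setGhostEffect is a made-up stand-in for the "set ghost effect" block:

public class TimedFade {

    // Hypothetical stand-in for Scratch's "set ghost effect to (value)" block.
    static void setGhostEffect(double value) { /* apply the effect here */ }

    // The effect is a function of elapsed time (rate * progress), and the
    // final value is forced after the loop, exactly as described above.
    public static void animateGhost(double targetValue, double durationSeconds) throws InterruptedException {
        long startNanos = System.nanoTime();
        double elapsed;
        while ((elapsed = (System.nanoTime() - startNanos) / 1e9) < durationSeconds) {
            setGhostEffect(targetValue / durationSeconds * elapsed); // rate * progress
            Thread.sleep(5); // roughly one "frame"; lag here only delays updates, not the end value
        }
        setGhostEffect(targetValue); // snap to the final state after the loop
    }
}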

Is it possible to prevent tearing artifacts when drawing using GDI on a window with DWM composition?

I am drawing an animation using double-buffered GDI on a window, on a system where DWM composition is enabled, and seeing clearly visible tearing onscreen. Is there a way to prevent this?
Details
The animation takes the same image and moves it right to left over the screen; the number of pixels to offset is calculated from the difference between the current time and the animation's start and end times, giving a fraction complete that is applied to the whole window width. Timing uses timeGetTime with a 1 ms resolution. The animation draws in a loop without processing application messages; it calls the (VCL library) method Repaint, which internally invalidates and then calls UpdateWindow for the window in question, directly calling into the message procedure with WM_PAINT. The VCL implementation of the paint handler uses BeginBufferedPaint. Painting is itself double-buffered.
The aim of this is to have as high a frame rate as possible to get a smooth animation across the screen. (The drawing uses double-buffering to remove flickering and to ensure a whole image or frame is onscreen at any one time. It invalidates and updates directly by calling into the message procedure, without doing other message processing. Painting is implemented using modern techniques, e.g. BeginBufferedPaint, for Aero composition.) Within this, painting is done in a couple of BitBlt calls (one for the left side of the animation, i.e. what's moving offscreen, and one for the right side, i.e. what's moving onscreen).
When watching the animation, there is clearly visible tearing. This occurs on Windows Vista, 7 and 8.1 on multiple systems with different graphics cards.
My approach to handle this has been to reduce the rate at which it is drawing, or to try to wait for VSync before painting again. This might be the wrong approach, so the answer to this question might be "Do something else completely: X". If so, great :)
(What I'd really like is a way to ask the DWM to compose / use only fully-painted frames for this specific window.)
I've tried the following approaches, none of which remove all visible tearing. Therefore the question is, Is it possible to avoid tearing when using DWM composition, and if so how?
Approaches tried:
Getting the monitor refresh rate via GetDeviceCaps(Application.MainForm.Handle, VREFRESH); sleeping for 1 / refresh rate milliseconds. Slightly improved over painting as fast as possible, but may be wishful thinking. Perceptually slightly less smooth animation rate. (Tweaks: normal Sleep and a high-resolution spin-wait using timeGetTime.)
Using DwmSetPresentParameters to try to limit updating to the same rate at which the code draws. (Variations: lots of buffers (cBuffer = 8) (no visible effect); specifying a source rate of monitor refresh rate / 1 and sleeping using the above code (the same as just trying the sleeping approach); specifying a refresh per frame of 1, 10, etc (no visible effect); changing the source frame coverage (no visible effect.)
Using DwmGetCompositionTimingInfo in a variety of ways:
While cFramesPending > 0, spin;
Get cFrame (frame composed) and spin while this number doesn't change;
Get cFrameDisplayed and spin while this doesn't change;
Calculating a time to sleep until by adding qpcVBlank + qpcRefreshPeriod, and then spinning while QueryPerformanceCounter returns a time less than this (sketched below).
All these approaches have also been varied by painting, then spinning/sleeping before painting again; or the reverse: sleeping and then painting.
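For reference, the timing arithmetic in that last approach boils down to something like the following (sketched here in Java purely to illustrate the arithmetic; lastVBlankNanos and refreshPeriodNanos are assumed to come from DwmGetCompositionTimingInfo's qpcVBlank and qpcRefreshPeriod, converted from QPC units to nanoseconds):

public final class VBlankWait {
    // Coarse-sleeps most of the way to the next estimated vblank, then spins
    // for the final stretch so the wait ends close to the target time.
    public static void waitForNextVBlank(long lastVBlankNanos, long refreshPeriodNanos)
            throws InterruptedException {
        long target = lastVBlankNanos + refreshPeriodNanos;
        long remaining;
        while ((remaining = target - System.nanoTime()) > 0) {
            if (remaining > 2_000_000) {                      // more than ~2 ms away: sleep
                Thread.sleep((remaining - 2_000_000) / 1_000_000);
            }                                                 // otherwise: spin for precision
        }
    }
}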
Few seem to have any visible effect, and what effect there is is hard to quantify and may just be a result of a lower frame rate. None prevent tearing, i.e. none make the DWM compose the window with a "whole" copy of the contents of the window's DC.
Advice appreciated :)
Since you're using BitBlt, make sure your DIBs are 4 bytes/pixel. With 3 bytes/pixel, GDI is horribly slow while DWM is running; that could be the source of your tearing. Another BitBlt issue I've run into: if your DIB is somewhat large, the BitBlt call may take an unexpectedly long time. If you split up one call into smaller calls that each only draw a portion of the data, it might help. Both of these items helped in my case, only because BitBlt itself was running too slowly, thus leading to video artifacts.

Custom vsync Algorithm

When creating a game in any programming language (that can), it is important to have a fixed target frame rate at which the game will redraw the screen. However, some languages either do not have a sync function, or their timers are unreliable. Is there any method of keeping the frame rate steady manually, with only math and/or by sleeping the thread? Maybe using a frame delta?
So far I have only tried 'sleep(targetframerate - (targetframerate - delta))'.
This is supposed to recognise that the previous frame took longer than the target and compensate by making the next frame sooner; however it feeds back on itself and simply kills the frame rate exponentially.
These built-in sync functions must be using some sort of math in a method like this to steady the frame rate. How is it done in high-end APIs such as OpenGL?
Create a timer that runs very quickly, such as every millisecond (the reason is explained later), and declare these three variables:
private long targetMillis = 1000 / 60,
             lastTime     = System.currentTimeMillis(),
             targetTime   = lastTime + targetMillis;
targetMillis is the desired number of milliseconds between each frame. Change 60 to the desired frame rate.
lastTime is simply when the last frame ran, to compare how long it's been. Set it to now.
targetTime is when the next frame is due: now + targetMillis.
An optional timeScaler can also be added, to scale any movements so they don't slow down just because the frame rate has:
public static float timeScaler = 1;
Then, on each tick of the timer, run this code. It checks whether it's time for the next frame and sets up the following one - taking into account whether this frame was late and making the next one sooner accordingly.
long current = System.currentTimeMillis();                // now
if (current < targetTime) return;                         // stop here if it's not yet time for the next frame

timeScaler = (float) targetMillis / (current - lastTime); // scale the game by how late the frame is
lastTime = current;
targetTime = (current + targetMillis) - (current - targetTime);
// schedule the next frame targetMillis from now, minus however late this frame was

// [Game code here.]
One might assume that if we need a frame every targetMillis we could just create a timer with that period. However, as you said, timers can be a few milliseconds out. With this method it doesn't matter if the timer overshoots targetTime slightly, because the next targetTime is pulled back by however late the frame was; the speed of the timer is really just the resolution of the accuracy.
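Putting it together, a self-contained version could look like this - using java.util.Timer as the 1 ms tick source, long timestamps instead of the int casts, and a Runnable standing in for [Game code here]:

import java.util.Timer;
import java.util.TimerTask;

public class FramePacer {
    private final long targetMillis = 1000 / 60;          // desired ms between frames
    private long lastTime = System.currentTimeMillis();   // when the last frame ran
    private long targetTime = lastTime + targetMillis;    // when the next frame is due
    public static volatile float timeScaler = 1;          // scale movement by frame lateness

    public void start(final Runnable gameFrame) {
        new Timer(true).scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                long current = System.currentTimeMillis();
                if (current < targetTime) return;          // not time for the next frame yet
                timeScaler = (float) targetMillis / (current - lastTime);
                lastTime = current;
                targetTime += targetMillis;                // same as (current+targetMillis)-(current-targetTime)
                gameFrame.run();                           // [Game code here.]
            }
        }, 0, 1);                                          // tick every millisecond
    }
}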

CAAnimation that calls a method in periodic animation-progress intervals?

Say I want to animate a ball rolling 1000 pixels to the right, specifying a timing function in the process – something like this:
UIView *ball = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 30, 30)];
CABasicAnimation *anim =
    [CABasicAnimation animationWithKeyPath:@"transform.translation.x"];
anim.toValue = [NSNumber numberWithFloat:ball.frame.origin.x + 1000.0];
// move 1000 pixels to the right
anim.duration = 10.0;
anim.timingFunction = [CAMediaTimingFunction functionWithControlPoints:
    0.1 :0.0 :0.3 :1.0]; // accelerate fast, decelerate slowly
[ball.layer addAnimation:anim forKey:@"myMoveRightAnim"];
What I ultimately want is to have a method, say -(void)animationProgressCallback:(float)progress, be called during the animation, in regular intervals of the animation's progress in terms of the absolute "distance" between start and end values, i.e. ignoring the timing function.
I'll try to explain using the above example of the ball rolling 1000 px to the right (charted on the y axis; in our case 100% = 1000 px):
I want my callback method to be invoked whenever the ball has progressed another 250 pixels. Because of the timing function, the first 250 pixels might be reached after t0 = 2 seconds, half the total distance just t1 = 0.7 seconds later (fast acceleration kicks in), the 750 px mark another t2 = 1.1 seconds later, with the remaining t3 = 5.2 seconds needed to reach the 100% (1000 px) mark.
What would be great, but isn't provided:
If the animation called a delegate method in animation-progress intervals as described, I wouldn't need to ask this question… ;-)
Ideas how to solve the problem:
One solution I can think of is to calculate the bezier curve's values, map them to the ti values above (we know the total animation duration), and, once the animation is started, sequentially perform our animationProgressCallback: selector with those delays manually.
Obviously, this is insane (calculating bezier curves manually??) and, more importantly, unreliable (we can't rely on the animation thread and the main thread to be in sync - or can we?).
Any ideas??
Looking forward to your ideas!
Ideas how to solve the problem:
One solution I can think of is to calculate the bezier curve's values, map them to the ti values (we know the total animation duration), and, once the animation is started, sequentially perform our animationProgressCallback: selector with those delays manually.
Obviously, this is insane (calculating bezier curves manually??) and, more importantly, unreliable (we can't rely on the animation thread and the main thread to be in sync - or can we?).
Actually this is reliable. CoreAnimation is time based, so you could use the delegate to be notified when the animation really starts.
And about calculating the bezier path... well, look at it this way: it could be worse. If you wanted to implement a surface in OpenGL ES you would have to calculate a cubic Bezier surface! lol. Your case is only one dimension; it's not that hard if you know the maths.
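To sketch the maths (in plain Java just for illustration - this is not an Apple API): given the control points from the question, (0.1, 0.0, 0.3, 1.0), you can invert the cubic bezier to find the time at which each progress fraction is reached:

public class CubicBezierTiming {
    private final double c1x, c1y, c2x, c2y;

    public CubicBezierTiming(double c1x, double c1y, double c2x, double c2y) {
        this.c1x = c1x; this.c1y = c1y; this.c2x = c2x; this.c2y = c2y;
    }

    // Cubic bezier from (0,0) to (1,1); the same polynomial serves for the x and y components.
    private static double bezier(double t, double p1, double p2) {
        double u = 1 - t;
        return 3 * u * u * t * p1 + 3 * u * t * t * p2 + t * t * t;
    }

    // Seconds after the animation start at which the value has covered `progress` (0..1).
    public double timeForProgress(double progress, double durationSeconds) {
        // Solve bezierY(t) = progress by bisection (monotonic for these control points).
        double lo = 0, hi = 1;
        for (int i = 0; i < 60; i++) {
            double mid = (lo + hi) / 2;
            if (bezier(mid, c1y, c2y) < progress) lo = mid; else hi = mid;
        }
        return bezier((lo + hi) / 2, c1x, c2x) * durationSeconds;
    }

    public static void main(String[] args) {
        CubicBezierTiming f = new CubicBezierTiming(0.1, 0.0, 0.3, 1.0);
        for (double p : new double[] {0.25, 0.5, 0.75, 1.0}) {
            System.out.printf("progress %.0f%% reached after %.2fs%n", p * 100, f.timeForProgress(p, 10.0));
        }
    }
}

The times this prints are what you would feed to performSelector:withObject:afterDelay: (or a timer) if you went the manual route.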

Smooth animations using GTK+

I'm creating a network animator (similar to nam, if you have used it before).
Basically, I have nodes represented as small dots on a GTK+ DrawingArea, and I update the positions of these nodes and redraw the DrawingArea in a loop.
The resulting animation is fast, but not smooth (there's a lot of flicker). This is probably because I fill the DrawingArea with a solid color before each frame.
How do you think I can best tackle this problem? Should I pre-render the frames onto Pixbufs? Is there a better solution?
Here's my current drawing code (using PyGTK):
rect = self.drawing_area.get_allocation()
style = self.drawing_area.get_style()
pos = [n.position_at(self.t) for n in self.nodes]

self.drawing_area.window.draw_rectangle(style.bg_gc[gtk.STATE_NORMAL], True,
                                         0, 0, rect.width, rect.height)
for p in pos:
    self.drawing_area.window.draw_arc(style.fg_gc[gtk.STATE_NORMAL], True,
                                      rect.width * (p.x / 2400.0) - NODE_SIZE/2,
                                      rect.height * (p.y / 2400.0) - NODE_SIZE/2,
                                      NODE_SIZE, NODE_SIZE,
                                      0, 64 * 360)
where self.t is the current time, which is incremented in the loop.
I changed my code to render the frames onto a Pixmap, and replaced the DrawingArea with an Image.
While this solved the flickering, now the CPU usage has peaked. The animation is still quite fast, but I don't think this method is scalable.
Time for some optimization, I guess.
UPDATE: It turns out using expose-event with an Image wasn't such a good idea. CPU usage is back to normal.
About the expose-event handling, check out the first paragraph on Animations with Cairo + Gtk :P
Multi-threaded Animation with Cairo and GTK+
Complex animations with cairo and GTK+ can result in a laggy interface. This is because the gtk_main() thread runs in a single loop. So, if your do_draw() function implements a complicated drawing command, and it is called from the gtk_main() thread (say by an on_window_expose_event() function), the rest of your gtk code will be blocked until the do_draw() function finishes. Consequentially, menu items, mouse clicks, and even close button events will be slow to be processed and your interface will feel laggy.
One solution is to hand off all the processor-intensive drawing to a separate thread, thus freeing the gtk_main() thread to respond to events.
http://cairographics.org/threaded_animation_with_cairo/
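The same pattern, sketched in Java/Swing rather than GTK+ just to show the shape of it: a worker thread renders each frame into an offscreen image, publishes it, and the UI thread only blits the finished image, so event handling stays responsive.

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

public class ThreadedAnimationPanel extends JPanel {
    private volatile BufferedImage finished;   // last fully rendered frame

    @Override protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        BufferedImage img = finished;
        if (img != null) g.drawImage(img, 0, 0, null);  // cheap blit on the UI thread
    }

    public void startRendering() {
        new Thread(() -> {
            while (true) {
                BufferedImage frame = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
                Graphics2D g2 = frame.createGraphics();
                // ... expensive drawing of the nodes goes here, off the UI thread ...
                g2.dispose();
                finished = frame;   // publish the completed frame
                repaint();          // repaint() is safe to call from any thread
                try { Thread.sleep(16); } catch (InterruptedException e) { return; }
            }
        }, "render-thread").start();
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame f = new JFrame("Threaded animation sketch");
            ThreadedAnimationPanel panel = new ThreadedAnimationPanel();
            f.add(panel);
            f.setSize(800, 600);
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
            panel.startRendering();
        });
    }
}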
