So I'm making a game that depends greatly on time and how fast a person responds, and I noticed a little bug where the timing comes out randomized. For example, I have this script:
wait (pick random .01 to 3) seconds
set ghost effect to 0
reset timer
repeat 5
  wait .05 seconds
  change ghost effect by 20
and each time I run this, I get different times. It can't be the randomized wait time, because the reset timer block comes after it. I ran some tests: 7 out of 12 times I got 0.8 seconds, which is what I'm trying to get; 3 out of 12 times I got 0.7; and 2 out of 12 times I got 0.6. If there is any way to make the timer more accurate, or to improve my code to reduce lag, it would be much appreciated.
One general solution would be to make how ghosted the sprite is a "function" of how long it's been since the animation started. This would work something like this:
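(The original answer showed the script as an image; here is a sketch of the blocks, reconstructed from the step-by-step description below.)

set animation start to timer
repeat until (timer - animation start) > 4
  set color effect to (200 / 4) * (timer - animation start)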
This script animates the sprite all the way around the color-effect wheel across four seconds. It'll be most clear how that works if we go over it step by step.
We want our animation to last for four seconds. But how do we actually make it last that long? As was seen in your script, just using a "repeat (40): wait 0.1 seconds" doesn't always result in waiting exactly four seconds. Instead, we use a "repeat until" loop: "repeat until (timer - animation start) > 4".
We get the "animation start" variable just by setting it to the current timer value when we start the animation. We'll see "timer - animation start" again later on; what it really means is the progress through the animation: "timer - animation start" starts at zero and gradually increases to four as the animation runs. (Of course, when it reaches 4, we want to stop the animation, and that's why we use the "repeat until" block.)
Here's the big question: how, given the current amount of time through the animation, can we decide what the color effect should be? It turns out, that's not so hard to answer, but we do need to think through it since it takes math. We want to transition from 0 to 200 over a period of 4 seconds. You can write that down as a rate: 200 units per 4 seconds, so, 200 / 4. Then we just multiply that rate by how far through the animation we are: (200 / 4 * progress). Progress is easy to get, again; we just reuse the "timer - animation start" blocks.
Do you believe that I'm right? Here's a list with some numbers to convince you (but really, you should try out this script yourself!):
0s: (timer - animation start) = 0, color effect = (200/4) * 0 = 0. This is the starting point of the animation, so it makes sense for the color effect to be zero.
1s: (timer - animation start) = 1, color effect = (200/4) * 1 = 50.
2s: (timer - animation start) = 2, color effect = (200/4) * 2 = 100. Halfway through the 4-second animation: since we're transitioning from 0 to 200, it makes sense to now be at 100.
3.5s: (timer - animation start) = 3.5, color effect = (200/4) * 3.5 = 175.
4s: (timer - animation start) = 4, color effect = (200/4) * 4 = 200. Now that we're at the end, we've transitioned all the way to 200.
To try this yourself, I do recommend implementing some "artificial lag". That just means adding a "wait (random 0.1 - 0.3) seconds" block to simulate the lag that might show up in a very complex project or on a slow computer.
Since we're just dealing with a basic math formula, it's very easy to change the numbers to get a different result. Here is a script which transitions from 0 to 100 over 2 seconds:
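(Again reconstructed as a sketch, assuming the same structure as the four-second script:)

set animation start to timer
repeat until (timer - animation start) > 2
  set ghost effect to (100 / 2) * (timer - animation start)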
But here's a point where you might find a "gotcha" -- look at what happens if you add artificial lag:
The cat turns ghost-y... but doesn't ENTIRELY disappear! Yikes! So, what caused this?
Here's the problem: The animation stops before (timer - animation start) is exactly 2 seconds. So, we never run that 2s step, where the ghost effect would be 100 - and we're left with a sprite that is not entirely ghosted.
Luckily, the solution is simple. Just make sure you additionally switch to the final state after the animation has ended. Of course, that just means attaching the appropriate "set effect" block right after the "repeat until" loop:
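(As a sketch:)

set animation start to timer
repeat until (timer - animation start) > 2
  set ghost effect to (100 / 2) * (timer - animation start)
set ghost effect to 100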
Now the sprite's ghost effect will be set to 100 immediately after the loop, regardless of where the loop left it.
By the way, when you tested these scripts out for yourself - you did, didn't you? - did you notice that this animates very smoothly? In fact, it animates as smoothly as possible. It will always animate at the top framerate that the user's computer can handle, since it runs every Scratch frame (there is no "wait n seconds" block)! Also, a simple test of your understanding - how can you re-implement the "glide (n) seconds to (x) (y)" block using this? It's definitely possible!
Related
I am currently working with painting and displaying on a Cartesian coordinate system. In my game, I have fast moving objects, bullets, for which I use the following formula to determine position:
x += speed * Cos(theta);
y += speed * Sin(theta);
where theta is an angle in radians and speed scales the step size at the cost of overall continuity: the larger speed gets, the larger the "jump" between the starting and the next calculated x,y point.
I had to use this formula with a 'high speed' object, so instead of using a timer, which is limited to millisecond (0.001 s) resolution, I utilized a while loop:
while (true) {
    if (currentTime - oldTime > setInterval) {
        // x, y and intersection operations
    }
    if (currentTime - oldTime > setInterval) {
        // paint operations
    }
    sleep(0, nanoseconds); // sleeps the thread, or if you're a C kind of guy, the "task"
}
I want the x,y and intersection operations to happen at a much faster rate than the paint event, which I plan to have occur 30-125 times a second (basically the refresh rate, in hertz, of a monitor).
Actual Questions:
What would be the most efficient rate for the x,y and intersection operations, so that they perform at a rate consistent across different CPUs (from a dusty single core @ 1.6 GHz to a fancy shmancy hex-core @ 4.0 GHz)?
Is there a better angle position formula than mine for these operations?
*Note: my method of painting the object has nothing to do with my problems, in case you were wondering.
Have a timer fire every time the screen refreshes (60 Hz?). On each tick, you calculate where the object is at that point in time and draw it at the determined location.
Whenever you want to find out where the object currently is, you run the physics simulation until it has caught up with the point in time you want to render. This way the object is animated at exactly the point in time it should be.
Define the frequency at which the physics simulation runs. You can pick 60Hz as well or any integer multiple of it. Run the physics engine with the same time increment (which is 1/Frequency). When you want to render, find out how many physics ticks are missing and run them one by one.
This scheme is completely robust against missing or superfluous timer ticks, and CPU clock rate does not matter either. The object is always rendered at the precise position it should be in.
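A minimal Java sketch of that scheme (the names FixedStepLoop, PHYSICS_HZ, stepPhysics and draw are placeholders of mine, not from the question):

public class FixedStepLoop {
    static final int PHYSICS_HZ = 120;           // 60 Hz or an integer multiple of it
    static final double DT = 1.0 / PHYSICS_HZ;   // fixed time increment per physics tick

    long ticksDone = 0;
    final long startNanos = System.nanoTime();

    // Called once per screen refresh, e.g. from the render timer.
    void renderFrame() {
        double elapsed = (System.nanoTime() - startNanos) / 1e9;
        long ticksDue = (long) (elapsed * PHYSICS_HZ);
        while (ticksDone < ticksDue) { // run the missing physics ticks one by one
            stepPhysics(DT);           // always the same increment, immune to timer jitter
            ticksDone++;
        }
        draw();                        // the object is now exactly where it should be
    }

    void stepPhysics(double dt) { /* advance positions by dt */ }
    void draw() { /* paint at the current positions */ }
}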
When creating a game in any programming language (that can), it is important to have a fixed target frame rate at which the game redraws the screen. However, some languages either have no sync function or have unreliable timers. Is there any method of keeping the frame rate steady manually, using only math and/or by sleeping the thread? Maybe using a frame delta?
So far I have only tried 'sleep(targetframerate - (targetframerate-delta))'.
This is supposed to recognize that the previous frame took longer than the target and compensate by starting the next frame sooner. However, the expression simplifies to just sleep(delta), so the correction feeds back on itself and simply kills the frame rate.
These built-in sync functions must be using some sort of math in a method like this to steady the frame rate. How is it done in high-end APIs such as OpenGL?
Create a timer that runs very quickly such as every millisecond (explained why later) and declare these three variables:
private int targetMillis = 1000/60,
            lastTime = (int)System.currentTimeMillis(),
            targetTime = lastTime + targetMillis;
targetMillis is the desired number of milliseconds between each frame. Change 60 to the desired frame rate.
lastTime is simply when the last frame was, used to measure how long it's been. Set it to now.
targetTime is the time the next frame is due: now + targetMillis.
An optional timeScaler can also be added, to scale any movements so they don't slow down when the frame rate does:
public static float timeScaler = 1;
Then on each tick of the timer, run this code, which checks whether it's time for the next frame and sets up the one after, taking into account whether the frame was late and scheduling the next one sooner as appropriate.
int current = (int)System.currentTimeMillis();           // now time
if (current < targetTime) return;                        // stop here if it's not time for the next frame
timeScaler = (float)targetMillis / (current - lastTime); // scale game movement by how late the frame is
lastTime = current;
targetTime = (current + targetMillis) - (current - targetTime);
// Schedule the next frame where it should be (in targetMillis), subtracting the amount this frame was late.
[Game code here.]
One would assume that if we need a frame every targetMillis, we could just create a timer with that period. However, as you said, timers can be a few milliseconds off. With this method it doesn't matter: if a tick overshoots, the next targetTime is offset back by the same amount, so the error doesn't accumulate. The speed of the timer is really just the resolution of accuracy.
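Put together as a self-contained sketch (the javax.swing.Timer and the println stand in for a real game loop; they are my additions, not part of the answer above):

import javax.swing.Timer;

public class FramePacer {
    static int targetMillis = 1000 / 60,                      // desired frame interval
               lastTime = (int) System.currentTimeMillis(),
               targetTime = lastTime + targetMillis;
    static float timeScaler = 1;

    public static void main(String[] args) {
        // A 1 ms timer: its period is the resolution of accuracy, not the frame rate.
        new Timer(1, e -> {
            int current = (int) System.currentTimeMillis();
            if (current < targetTime) return;                 // not yet time for a frame
            timeScaler = (float) targetMillis / (current - lastTime);
            lastTime = current;
            targetTime = (current + targetMillis) - (current - targetTime);
            System.out.println("frame, timeScaler = " + timeScaler); // [game code here]
        }).start();
    }
}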
I am drawing about 12,000 objects to a JavaFX 2.2 Canvas. The GUI blocks for around 2 seconds, while my measurement of the code's execution time claims less than half a second.
So I am wondering how the measured execution time of my code can be so much shorter than the time the GUI is blocked. I guess there's some kind of buffering on the Canvas, so things are first written to the buffer and processed later? So after ~0.5 seconds, everything I drew to the Canvas was only written to the buffer?
Assuming everything is buffered first leads to my next question: isn't the drawing of the things in the buffer always done on the UI thread, so I can't shorten the timespan where the GUI is blocked by drawing from the buffer? Even if I drew to the Canvas from the application thread, as long as the buffer is still processed on the UI thread, I won't get rid of this 1.5 seconds of GUI blocking?
Thanks for any hint!
Update: Pseudocode:
long start = System.nanoTime();
// Using GraphicsContext, draw ~12,000 arc parts (GraphicsContext.drawArc method)
// to a Canvas
long end = System.nanoTime();
double elapsedTime = (end-start)/1000000000.0; //in seconds
System.out.println("elapsed time: " + elapsedTime); // something around 0.5, however the GUI hangs for around 2 seconds - where do the additional 1.5 seconds come from?
I am doing everything on the application thread, so I understand that the GUI hangs for the 0.5 seconds during which I am adding stuff to the Canvas. However, I can't understand why the GUI hangs for ~2 seconds when my drawing to the Canvas finished after 0.5 seconds.
Say it took a whole second for a character to jump in a game. How would a game developer go about keeping that jump time at 1 second if the FPS is 10, 30, or 100? In other words, how do you stop a game's FPS from affecting the gameplay speed?
I presume there's a certain method of doing this, so I was wondering what exactly it is?
Thankssss,
Alex!
Normally by using a timer to record how much time has passed since the last frame was rendered. There are many articles and samples on the subject available via Google:
Achieving Frame Rate Independent Game Movement
Constant game speed independent of variable FPS in OpenGL with GLUT?
Fixed time step vs Variable time step
Fix Your Timestep!
Of course if your FPS is allowed to be anything then you will end up with unrealistic simulations. For this reason there is the concept of a "fixed time step".
The "Fix Your Timestep!" (and the previous articles linked on that page) in particular is a good read on this subject.
A short answer to a large subject:
I guess your game should place its "animation" based not on the frame sequence number but on the time elapsed from a reference point.
1) Example: a 1-second jump with only 3 drawings should show drawing #1 at t0, drawing #2 between t+0.25 and t+0.75, and drawing #3 between t+0.75 and t+1.
2) Example: if your move/animation is determined by a formula like positionX(int relativeFrameNumber), you should consider changing your function to use time instead, like positionX(long relativeTimeInMillisecond) (see the sketch after this list),
or, with a small change in your game loop,
3) place a "wait" in your loop that is calibrated against a continuously computed framerate.
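For instance, a minimal Java sketch of idea 2 (the TimeBasedMove class, the 200 px/s speed, and the printout are assumptions of mine for illustration):

public class TimeBasedMove {
    static final double SPEED = 200.0; // pixels per second (an assumed value)

    // Position as a function of elapsed time, not of frame number.
    static double positionX(long relativeTimeInMillisecond) {
        return SPEED * (relativeTimeInMillisecond / 1000.0);
    }

    public static void main(String[] args) throws InterruptedException {
        long animationStart = System.currentTimeMillis();
        long elapsed = 0;
        while (elapsed < 1000) {                  // a 1-second move
            elapsed = System.currentTimeMillis() - animationStart;
            System.out.printf("x = %.1f px at t = %d ms%n", positionX(elapsed), elapsed);
            Thread.sleep(100);                    // whatever the frame cadence, x tracks time
        }
    }
}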
Hope that helps.
Many physics engines pass around a delta time in an update() method of some kind.
void update(float dt)
This delta value represents the current frame step as a proportion of a fixed frame rate (say, 60 fps). For example, if dt is 1.0 then we're at 60 fps, if dt is 2.0 then we're at 30 fps, and if dt is 0.5 then we're at 120 fps, etc.
To move (in your case, jump) a character at the same speed for any frame rate, multiply the object's velocity vector by dt.
void update(float dt)
{
    myChar.Vel += myChar.Direction.Normalized() * myChar.Speed * dt;
    // myChar.Speed is in units per frame at the reference rate
}
Note: a different calculation is required for quadratic (accelerated) physics.
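For example, here is a sketch of what that means for constant acceleration (treating dt as seconds for clarity; the gravity constant and the PosY/VelY field names are made up):

// With constant acceleration, integrating position over a step
// needs a dt-squared term; it is not just velocity * dt.
void update(float dt) {
    final float g = -9.8f;                                // gravity in units/s^2 (assumed)
    myChar.PosY += myChar.VelY * dt + 0.5f * g * dt * dt; // quadratic term matters for large dt
    myChar.VelY += g * dt;                                // then advance the velocity
}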
Say I want to animate a ball rolling 1000 pixels to the right, specifying a timing function in the process – something like this:
UIView *ball = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 30, 30)];
CABasicAnimation *anim =
    [CABasicAnimation animationWithKeyPath:@"transform.translation.x"];
anim.toValue = [NSNumber numberWithFloat:ball.frame.origin.x + 1000.0];
// move 1000 pixels to the right
anim.duration = 10.0;
anim.timingFunction = [CAMediaTimingFunction functionWithControlPoints:
    0.1 :0.0 :0.3 :1.0]; // accelerate fast, decelerate slowly
[ball.layer addAnimation:anim forKey:@"myMoveRightAnim"];
What I ultimately want is to have a method, say -(void)animationProgressCallback:(float)progress, be called during the animation, in regular intervals of the animation's progress in terms of the absolute "distance" between start and end values, i.e. ignoring the timing function.
I'll try to explain with the above example of the ball rolling 1000 px to the right (think of distance on the y axis; in our case 100% = 1000 px):
I want my callback method to be invoked whenever the ball has progressed 250 pixels. Because of the timing function, the first 250 pixels might be reached at t0 = 2 seconds, half the total distance just t1 = 0.7 seconds later (fast acceleration kicks in), the 750 px mark another t2 = 1.1 seconds later, with the remaining t3 = 5.2 seconds needed to reach the 100% (1000 px) mark.
What would be great, but isn't provided:
If the animation called a delegate method in animation-progress intervals as described, I wouldn't need to ask this question… ;-)
Ideas how to solve the problem:
One solution I can think of is to calculate the bezier curve's values, map them to the tk values (we know the total animation duration), and, when the animation is started, sequentially perform our animationProgressCallback: selector with those delays manually.
Obviously, this is insane (calculating bezier curves manually??) and, more importantly, unreliable (we can't rely on the animation thread and the main thread to be in sync - or can we?).
Any ideas??
Looking forward to your ideas!
Ideas how to solve the problem:

One solution I can think of is to calculate the bezier curve's values, map them to the tk values (we know the total animation duration), and, when the animation is started, sequentially perform our animationProgressCallback: selector with those delays manually.

Obviously, this is insane (calculating bezier curves manually??) and, more importantly, unreliable (we can't rely on the animation thread and the main thread to be in sync - or can we?).
Actually this is reliable. CoreAnimation is time based, so you could use the delegate to be notified when the animation really starts.
And about calculating the bezier path... well, look at it this way: it could be worse. If you wanted to implement a surface in OpenGL ES you would have to calculate a cubic Bezier patch! lol. Your case is only one dimension; it's not that hard if you know the maths.
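For what it's worth, here is a hedged Java sketch of that one-dimensional math (bisection on the curve parameter; the control points are the ones from the question, everything else is made up for illustration):

public class BezierTiming {
    // One coordinate of a cubic Bezier with endpoint values 0 and 1
    // and control values c1, c2 (the form CAMediaTimingFunction uses).
    static double bezier(double u, double c1, double c2) {
        double v = 1 - u;
        return 3 * v * v * u * c1 + 3 * v * u * u * c2 + u * u * u;
    }

    // Normalized time x at which progress y first reaches 'target',
    // found by bisecting on the curve parameter u (y is monotone here).
    static double timeForProgress(double target,
                                  double x1, double y1, double x2, double y2) {
        double lo = 0, hi = 1;
        for (int i = 0; i < 50; i++) {
            double mid = (lo + hi) / 2;
            if (bezier(mid, y1, y2) < target) lo = mid; else hi = mid;
        }
        return bezier((lo + hi) / 2, x1, x2); // normalized time in [0, 1]
    }

    public static void main(String[] args) {
        // Control points from the question: (0.1, 0.0) and (0.3, 1.0).
        for (double p : new double[] { 0.25, 0.5, 0.75, 1.0 }) {
            double t = timeForProgress(p, 0.1, 0.0, 0.3, 1.0);
            System.out.printf("progress %.0f%% at t = %.2f of the duration%n", p * 100, t);
        }
    }
}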