Millisecond accuracy of ActionScript new Date() or getTimer()

I'd like to measure the reaction time of a user. In this example, I'm using actionscript, but the concept is really what is important, so feel free to answer in your language of choice, if you want to show any code.
The user sits in front of a screen and will be presented with a red dot. When they see the red dot, they hit the space bar.
My logic is as follows: make red dot visible, create a new date, wait for spacebar, create a new date, find the difference in milliseconds using a TimeSpan object.
//listen for the keystroke
this.systemManager.stage.addEventListener(KeyboardEvent.KEY_DOWN, catchSpace, true, 1);
...
if (e.keyCode == Keyboard.SPACE) {
    e.preventDefault();
    this.dispatchEvent(new PvtEvent(PvtEvent.BTN_CLICK));
}
//show the red dot, making note of the time
redDot.visible = true;
this.startCount=new Date();
//user clicks the space bar
this.endCount=new Date();
var timeSpan:Number=TimeSpan.fromDates(this.startCount, this.endCount).totalMilliseconds;
I feel like this should work, but I'm getting some values that are disconcerting. Here is a typical result set:
[254, 294, 296, 305, 306, 307, 308, 309, 310, 308, 312, 308, 338, 346, 364, 370, 380, 387, 395, 402, 427]
Notice that some of the values are close, and 308 is recorded multiple times. So, my questions are as follows:
Is my code, or the logic I'm using, flawed in some way?
What is the probability that the user is able to produce repeat times?
If the probability is low, then what am I missing here?
I should also note that I have (quite accidentally) recorded a 12 ms response time. I was testing the app and happened to hit the space bar just as the red dot appeared. So I doubt that my code is incapable of measuring time accurately, at least to within ±12 ms :).

I would suppose that reaction times follow a somewhat normal distribution, so it may well be that some results occur more than once. Your reaction times run from 254 to 427 ms, which is 174 possible different values, so the question becomes: across x tests, how likely is it that some of them are the same? Since the distribution is probably normal rather than uniform, that likelihood only increases.
If you run this on your computer, remember that other applications/threads compete for the CPU. There is also some latency in the OS, and it matters whether the keyboard is connected via USB or PS/2 (a USB device/hub is polled, while PS/2 goes straight to an IRQ).
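To put a rough number on the repeat-value point above: even under the (unrealistic) assumption that reaction times are spread uniformly over the 174 possible values, the chance of at least one repeat among 21 samples is already high; clustering around the mean only raises it. A minimal sketch in Java of the birthday-problem calculation:

public class RepeatProbability {
    public static void main(String[] args) {
        int possibleValues = 174; // 254..427 ms inclusive
        int samples = 21;         // size of the result set above

        // P(all distinct) = product over i of (1 - i / d)
        double pAllDistinct = 1.0;
        for (int i = 0; i < samples; i++) {
            pAllDistinct *= 1.0 - (double) i / possibleValues;
        }
        System.out.printf("P(at least one repeat) = %.2f%n", 1.0 - pAllDistinct);
    }
}

This prints roughly 0.7, so repeated values are expected even before considering the Flash timing issue discussed below.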

No, the logic seems fine. This is a perfectly simple way to measure time to the ms.
Turns out human beings and computers can seldom do anything to millisecond accuracy.
The thing I'm tripping on is Flash!
After a few months of on-and-off testing, we figured out the issue: the language. From the ASDoc for the Flex Timer:
A delay lower than 20 milliseconds is not recommended. Timer frequency is limited to 60 frames per second, meaning a delay lower than 16.6 milliseconds causes runtime problems.
Flash runs with a frame rate of 60 FPS. I guess this means that if you try to measure time and want to be accurate to less than 16 ms, you are out of luck. However, this does explain why I would see repeating values, as anything inside this "60 FPS window" was just being measured as the same time.
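For comparison, outside Flash the measurement itself is simple; here is a minimal console sketch in Java (illustrative only: System.nanoTime() is monotonic and typically sub-millisecond, but line-buffered console input means this times the Enter key rather than a raw key press):

import java.util.Scanner;

public class ReactionTimer {
    public static void main(String[] args) throws InterruptedException {
        Scanner in = new Scanner(System.in);

        // Wait a random 1-3 seconds before showing the "red dot".
        Thread.sleep(1000 + (long) (Math.random() * 2000));
        System.out.println("GO! Press Enter now.");

        long start = System.nanoTime();          // monotonic, unaffected by wall-clock changes
        in.nextLine();                           // blocks until the user reacts
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Reaction time: " + elapsedMs + " ms");
    }
}

The point of the accepted answer still stands: whatever clock you read, the timestamps can only be as fine-grained as the event delivery that triggers them.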

Related

Handle Position Update in "real-time"

How much real-life time corresponds to 1 second of the Veins handle position update?
In the omnet.ini file I defined the simulation time and the handle position update as 100 s and 1 s, respectively. However, I'm not sure how much this is worth in real life, since simulation time and real time don't line up.

How to find period in time series – with fuzziness?

Given a set of timestamps, I want to find a periodic grid most of them fall into.
Example set that falls into a grid with a period of 30:
10, 40, 72, 99, 164, 172, 190
Three fuzziness parameters here:
a small deviation (72 -> 70, 164 -> 160) is acceptable;
some of the samples (e.g. 172) fall outside the grid; their acceptable percentage is also set by a parameter;
a sample around 130 is skipped, but that is also OK; the acceptable number of such "holes" can likewise be configured somehow.
Looking at the intervals between the samples, I can think of searching for the greatest common divisor (GCD), again with some fuzziness.
Please advise an approach to this problem.
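One possible direction, purely as a sketch of the fuzzy-grid idea rather than a known algorithm: score each candidate period by how many samples snap to its grid within the allowed deviation, then prefer the largest period that clears the acceptable-outlier threshold (sub-multiples of the true period always score at least as well). All names and thresholds below are illustrative:

import java.util.List;

public class PeriodScore {
    // Fraction of samples within `tolerance` of a grid with the given period and phase.
    static double score(List<Integer> times, double period, double phase, double tolerance) {
        int hits = 0;
        for (int t : times) {
            double offset = (t - phase) % period;
            double dist = Math.min(offset, period - offset); // distance to the nearest grid line
            if (dist <= tolerance) hits++;
        }
        return (double) hits / times.size();
    }

    public static void main(String[] args) {
        List<Integer> times = List.of(10, 40, 72, 99, 164, 172, 190);
        System.out.println("period 30: " + score(times, 30, 10, 4)); // ~0.86, only 172 misses the grid
        System.out.println("period 25: " + score(times, 25, 10, 4)); // much lower, ~0.29
    }
}

A full solution would search over candidate periods and phases and also penalize long runs of skipped grid points ("holes").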

JavaFX Canvas delay

I'm trying to convert some Java2D code to JavaFX and I'm stuck with an issue regarding the performance of the JavaFX Canvas. At some point, I'll have to draw thousands of small circles on the screen.
My problem is that in the first drawing, my code takes a lot of time to execute. But if I have to perform a second drawing, it takes only a fraction of the time to draw (it is at least 10 times faster).
Is there anything I'm doing wrong? Is there any way to prevent that initial delay?
I wrote this code to test it. In this code I draw 500,000 circles at random positions on a 1000 x 1000 canvas (built previously). I linked this code to a button click event; the first time I click it takes 10 seconds to execute, but if I just click again, it takes only 0.025 seconds.
private void paintCanvas() {
    long initTime = System.currentTimeMillis();

    GraphicsContext cg = canvas.getGraphicsContext2D();
    cg.setFill(Color.WHITE);
    cg.fillRect(0, 0, canvas.getWidth(), canvas.getHeight());
    cg.setFill(Color.rgb(0, 0, 0, 0.1));

    Random rand = new Random();
    for (int i = 0; i < 500000; i++) {
        cg.fillOval(1000 * rand.nextFloat(), 1000 * rand.nextFloat(), 2, 2);
    }

    long endTime = System.currentTimeMillis();
    System.out.println("Time spent on drawing: " + (endTime - initTime) / 1000.0f);
}
Actually, there is no maximum number of new elements. It can vary from a few hundred to hundreds of thousands, depending on the user's needs. And yes, it is OK if some elements pop in over time.
Guys, I thank you for all the help. I sent the same question to the OpenJFX mailing list and one of the developers answered. It seems that my JavaFX 2.2 version still uses an old model for growing the command buffer. The new version, JavaFX 8, uses a more efficient model which makes the first painting as fast as the subsequent ones.
Here is the answer I got:
Jim Graham (james.graham at oracle.com)
Mon May 12 21:17:19 UTC 2014
This is likely due to growing the command buffer which was done linearly at one point (probably still done that way in 2.2), but is now exponential in 8.0. The first render time is nearly instantaneous in 8.0, but takes a long time as you found when I try it with one of my old 2.x builds...
...jim
I can think of a couple things but let's start with one:
It could be that the JVM's Just-In-Time compiler is kicking in during your execution. This depends on your JVM options (whether you are using the client or server JIT, and whether AggressiveOpts is enabled).
Remember, the JVM is smart enough to perform optimizations on that loop. In my opinion you can start there: add -XX:+PrintCompilation to your JVM options when executing this and look at the output on the console. Your method should be compiled during the first execution, and you should not observe any further compilation during the second. If that is the case, then you know this piece of code was compiled and stored in the code cache, and execution is no longer going through the interpreter but through natively compiled code, which has better performance.
Let us know your findings!
JVM options reference (might need to find your specific JVM doc):
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
P.S. Can you try moving the start-time capture to right before you instantiate Random? It would be nice to take the time at three points: at the beginning, right before creating Random, and right after the loop finishes. The idea is to get a breakdown of where your code spends its time when you observe this (the loop, or the canvas/context setup).
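A minimal sketch of that breakdown, reusing the question's own paintCanvas code with two extra timestamps (variable names are illustrative), to see whether the time goes into the context setup and background fill or into the drawing loop:

private void paintCanvas() {
    long t0 = System.currentTimeMillis();

    GraphicsContext cg = canvas.getGraphicsContext2D();
    cg.setFill(Color.WHITE);
    cg.fillRect(0, 0, canvas.getWidth(), canvas.getHeight());
    cg.setFill(Color.rgb(0, 0, 0, 0.1));

    long t1 = System.currentTimeMillis();   // right before Random is created

    Random rand = new Random();
    for (int i = 0; i < 500000; i++) {
        cg.fillOval(1000 * rand.nextFloat(), 1000 * rand.nextFloat(), 2, 2);
    }

    long t2 = System.currentTimeMillis();   // right after the loop

    System.out.println("Context setup + clear: " + (t1 - t0) / 1000.0f + " s");
    System.out.println("Drawing loop:          " + (t2 - t1) / 1000.0f + " s");
}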

Android: Counting time in my main loop

I'd like a counter to run in the background while my game is being played, so that when the player survives for, say, 1 minute, something happens.
Currently, I'm thinking of doing this by getting the system time at start (say in surfaceCreated) and then in my physics update method, getting the current time and comparing the two. When 60,000 ms have passed then I know obviously 1 minute has passed.
Is this the best way to do this? Or is there another/better/simpler way.
Thanks all
That's a good way to go about it.
There aren't many other options. In my game, I have a stopwatch that I reset right before the frame update, so I can determine the amount of time spent doing other things. Then each object is stepped with this amount of time.
It's good because it allows me to move each object at a speed relative to time rather than framerate, so if framerate differs on a different device, things still move at the same speed.
If you're not doing this, you really should be. If your step function looks something like
object.x += speed.x;
then it really should be
object.x += speed.x * (stepTimeMs / 1000.0);  // speed.x in units per second, stepTimeMs in milliseconds
If you are doing this, it's probably best just to make a timer object that runs a Runnable when the timer expires (i.e., it sums up the step times and fires when the total reaches 60 seconds), as sketched below.
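A minimal sketch of that idea (class and callback names are illustrative, not from the question): accumulate the per-frame step times and run the callback once the total reaches the threshold:

// Accumulates step times and runs a callback once the threshold is reached.
public class SurvivalTimer {
    private final long thresholdMs;
    private final Runnable onExpired;
    private long elapsedMs = 0;
    private boolean fired = false;

    public SurvivalTimer(long thresholdMs, Runnable onExpired) {
        this.thresholdMs = thresholdMs;
        this.onExpired = onExpired;
    }

    // Call once per physics update with this frame's step time in ms.
    public void step(long stepTimeMs) {
        if (fired) return;
        elapsedMs += stepTimeMs;
        if (elapsedMs >= thresholdMs) {
            fired = true;
            onExpired.run();
        }
    }
}

// Usage in the game loop, e.g.:
//   SurvivalTimer oneMinute = new SurvivalTimer(60_000, () -> onSurvivedOneMinute());
//   ...each update: oneMinute.step(stepTimeMs);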

How do you separate game logic from display?

How can you make the display frames per second independent of the game logic? That is, so the game logic runs at the same speed no matter how fast the video card can render.
I think the question reveals a bit of misunderstanding of how game engines should be designed. Which is perfectly ok, because they are damn complex things that are difficult to get right ;)
You are under the correct impression that you want what is called Frame Rate Independence. But this does not only refer to Rendering Frames.
A Frame in single threaded game engines is commonly referred to as a Tick. Every Tick you process input, process game logic, and render a frame based off of the results of the processing.
What you want to do is be able to process your game logic at any FPS (Frames Per Second) and have a deterministic result.
This becomes a problem in the following case:
Check input:
- Input is key: 'W' which means we move the player character forward 10 units:
playerPosition += 10;
Now since you are doing this every frame, if you are running at 30 FPS you will move 300 units per second.
But if you are instead running at 10 FPS, you will only move 100 units per second. And thus your game logic is not Frame Rate Independent.
Happily, to solve this problem and make your game play logic Frame Rate Independent is a rather simple task.
First, you need a timer which will count the time each frame takes to render. This number in seconds (so 0.001 seconds to complete a Tick) is then multiplied by whatever it is that you want to be Frame Rate Independent. So in this case:
When holding 'W'
playerPosition += 10 * frameTimeDelta;
(Delta is a fancy word for "Change In Something")
So your player will move some fraction of 10 in a single Tick, and after a full second of Ticks, you will have moved the full 10 units.
However, this will fall down when it comes to properties where the rate of change also changes over time, for example an accelerating vehicle. This can be resolved by using a more advanced integrator, such as "Verlet".
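For completeness, a single velocity-Verlet step looks roughly like this (a sketch; acceleration() stands in for whatever force model you have):

// One velocity Verlet step: position uses the current acceleration,
// velocity uses the average of the old and new acceleration.
static double[] verletStep(double x, double v, double a, double dt) {
    double newX = x + v * dt + 0.5 * a * dt * dt;
    double newA = acceleration(newX);               // acceleration at the new position
    double newV = v + 0.5 * (a + newA) * dt;
    return new double[] { newX, newV, newA };
}

static double acceleration(double x) {
    return -9.81;                                   // e.g. constant gravity; replace with your model
}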
Multithreaded Approach
If you are still interested in an answer to your question (since I didn't answer it but presented an alternative), here it is: separating Game Logic and Rendering into different threads. It has its drawbacks, though. Enough so that the vast majority of game engines remain single threaded.
That's not to say there is only ever one thread running in so-called single-threaded engines. But all significant tasks are usually in one central thread. Some things like Collision Detection may be multithreaded, but generally the Collision phase of a Tick blocks until all the threads have returned, and the engine is back to a single thread of execution.
Multithreading presents a whole, very large class of issues, even some performance ones since everything, even containers, must be thread safe. And Game Engines are very complex programs to begin with, so it is rarely worth the added complication of multithreading them.
Fixed Time Step Approach
Lastly, as another commenter noted, having a Fixed size time step, and controlling how often you "step" the game logic can also be a very effective way of handling this with many benefits.
Linked here for completeness, but the other commenter also links to it:
Fix Your Time Step
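The core of that pattern is an accumulator: render as often as you like, but advance the game logic only in fixed increments. A rough sketch (running, update and render are assumed to exist elsewhere):

final double DT = 1.0 / 60.0;                   // fixed logic step: 60 updates per second
double accumulator = 0.0;
long previous = System.nanoTime();

while (running) {
    long now = System.nanoTime();
    accumulator += (now - previous) / 1e9;      // real time elapsed, in seconds
    previous = now;

    while (accumulator >= DT) {                 // catch up in fixed-size steps
        update(DT);                             // game logic always sees the same step size
        accumulator -= DT;
    }

    render(accumulator / DT);                   // leftover fraction can drive interpolation
}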
Koen Witters has a very detailed article about different game loop setups.
He covers:
FPS dependent on Constant Game Speed
Game Speed dependent on Variable FPS
Constant Game Speed with Maximum FPS
Constant Game Speed independent of Variable FPS
(These are the headings pulled from the article, in order of desirability.)
You could make your game loop look like:
int lastTime = GetCurrentTime();

while (1) {
    // how long is it since we last updated?
    int currentTime = GetCurrentTime();
    int dt = currentTime - lastTime;
    lastTime = currentTime;

    // now do the game logic
    Update(dt);

    // and you can render
    Draw();
}
Then you just have to write your Update() function to take into account the time differential; e.g., if you've got an object moving at some speed v, then update its position by v * dt every frame.
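For instance, the inside of Update() might look like this (a sketch; the field names are illustrative):

// dt is the elapsed time for this frame, in milliseconds
void update(int dt) {
    double seconds = dt / 1000.0;
    object.x += object.velocityX * seconds;     // velocity expressed in units per second
    object.y += object.velocityY * seconds;
}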
There was an excellent article on flipcode about this back in the day. I would like to dig it up and present it for you.
http://www.flipcode.com/archives/Main_Loop_with_Fixed_Time_Steps.shtml
It's a nicely thought out loop for running a game:
Single threaded
At a fixed game clock
With graphics as fast as possible using an interpolated clock
Well, at least that's what I think it is. :-) Too bad the discussion that ensued after this posting is harder to find. Perhaps the Wayback Machine can help there.
time0 = getTickCount();
do
{
    time1 = getTickCount();
    frameTime = 0;
    int numLoops = 0;

    while ((time1 - time0) > TICK_TIME && numLoops < MAX_LOOPS)
    {
        GameTickRun();
        time0 += TICK_TIME;
        frameTime += TICK_TIME;
        numLoops++;
        // Could this be a good idea? We're not doing it, anyway.
        // time1 = getTickCount();
    }
    IndependentTickRun(frameTime);

    // If playing solo and game logic takes way too long, discard pending time.
    if (!bNetworkGame && (time1 - time0) > TICK_TIME)
        time0 = time1 - TICK_TIME;

    if (canRender)
    {
        // Account for numLoops overflow causing percent > 1.
        float percentWithinTick = Min(1.f, float(time1 - time0) / TICK_TIME);
        GameDrawWithInterpolation(percentWithinTick);
    }
}
while (!bGameDone);
Enginuity has a slightly different, but interesting approach: the Task Pool.
Single-threaded solutions with time delays before displaying graphics are fine, but I think the more progressive way is to run the game logic in one thread and the display in another thread.
But you have to synchronize the threads the right way ;) It'll take a long time to implement, so if your game is not too big, a single-threaded solution will be fine.
Also, extracting the GUI into a separate thread seems to be a great approach. Have you ever seen a "Mission complete" pop-up message while units are moving around in an RTS game? That's what I'm talking about :)
This doesn't cover the higher-level program abstraction stuff, i.e. state machines etc.
It's fine to control movement and acceleration by adjusting them with your frame time lapse. But what about things like triggering a sound 2.55 seconds after this or that, or changing the game level 18.25 seconds later, etc.?
That can be tied to an elapsed-frame-time accumulator (counter), BUT these timings can get screwed up if your frame rate falls below your state script's resolution, i.e. if your higher-level logic needs 0.05 s granularity and you fall below 20 fps.
Determinism can be kept if the game logic is run on a separate "thread" (at the software level, which I would prefer for this, or at the OS level) with a fixed time slice, independent of fps.
The penalty is that you might waste CPU time between frames if not much is happening, but I think it's probably worth it.
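A sketch of the elapsed-time-accumulator idea for such scripted triggers (names are illustrative): accumulate the logic-step time and fire every event whose timestamp has passed, so a slow frame delays an event slightly instead of skipping it:

import java.util.PriorityQueue;

// Fires scheduled actions once the accumulated game time passes their timestamp.
class EventScheduler {
    private record Scheduled(double timeSec, Runnable action) {}

    private final PriorityQueue<Scheduled> queue =
            new PriorityQueue<>((a, b) -> Double.compare(a.timeSec, b.timeSec));
    private double elapsedSec = 0;

    void schedule(double atSec, Runnable action) {
        queue.add(new Scheduled(atSec, action));
    }

    // Call once per logic step with the step size in seconds.
    void step(double dtSec) {
        elapsedSec += dtSec;
        while (!queue.isEmpty() && queue.peek().timeSec <= elapsedSec) {
            queue.poll().action.run();          // still fires even if a slow frame overshot the timestamp
        }
    }
}

// e.g. scheduler.schedule(2.55, () -> playSound());
//      scheduler.schedule(18.25, () -> changeLevel());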
From my (limited) experience, Jesse's and Adam's answers should put you on the right track.
If you are after further information and insight into how this works, I found the sample applications for TrueVision 3D very useful.
