Can a program calculate the complexity of an algorithm?

Is there any way to compute the time complexity of an algorithm programmatically? For example, how could I calculate the complexity of a fibonacci(n) function?

The undecidability of the halting problem says that you can't even tell whether an algorithm terminates. I'm pretty sure it follows that you can't, in general, determine the complexity of an algorithm.

While it's impossible to do in all cases (short of writing your own code parser and analyzing the loops, what affects their bounds, and so on), you can still do it as a black-box test with an upper time limit: that is, pick some timeout, and treat any execution that exceeds it as running forever.
From this, your code would look similar to the following (quick and dirty code, sorry; it's a little verbose, and the math might be off for larger powers, I haven't checked).
It can be improved by using a fixed array of input values rather than randomly generated ones, and by checking a wider range of values; you should be able to check any input against any other two inputs and determine all the patterns of method duration.
I'm sure there are much better (namely more accurate) ways to estimate the O between a set of given measurements than shown here (which doesn't relate the run times between elements very much).
static void Main(string[] args)
{
    var sw = new Stopwatch();
    var inputTimes = new Dictionary<int, double>();
    List<int> inputValues = new List<int>();
    for (int i = 0; i < 25; i++)
    {
        inputValues.Add(i);
    }
    var ThreadTimeout = 10000;
    for (int i = 0; i < inputValues.Count; i++)
    {
        int input = inputValues[i];
        var WorkerThread = new Thread(t => CallMagicMethod(input)) { Name = "WorkerThread" };
        sw.Reset();
        Console.WriteLine("Input value '{0}' running...", input);
        sw.Start();
        WorkerThread.Start();
        WorkerThread.Join(ThreadTimeout);
        sw.Stop();
        if (WorkerThread.IsAlive)
        {
            Console.WriteLine("Input value '{0}' exceeds timeout", input);
            WorkerThread.Abort();
            inputTimes.Add(input, double.MaxValue);
            continue;
        }
        inputTimes.Add(input, sw.Elapsed.TotalMilliseconds);
        Console.WriteLine("Input value '{0}' took {1}ms", input, sw.Elapsed.TotalMilliseconds);
    }
    List<int> indexes = inputTimes.Keys.OrderBy(k => k).ToList();
    // compare each measurement against the two that follow it:
    for (int i = 0; i < indexes.Count - 2; i++)
    {
        int index0 = indexes[i];
        int index1 = indexes[i + 1];
        if (!inputTimes.ContainsKey(index1))
        {
            continue;
        }
        int index2 = indexes[i + 2];
        if (!inputTimes.ContainsKey(index2))
        {
            continue;
        }
        double[] runTimes = new double[] { inputTimes[index0], inputTimes[index1], inputTimes[index2] };
        if (IsRoughlyEqual(runTimes[2], runTimes[1], runTimes[0]))
        {
            Console.WriteLine("Execution time for input = {0} to {1} is roughly O(1)", index0, index2);
        }
        else if (IsRoughlyEqual(runTimes[2] / Math.Log(index2, 2), runTimes[1] / Math.Log(index1, 2), runTimes[0] / Math.Log(index0, 2)))
        {
            Console.WriteLine("Execution time for input = {0} to {1} is roughly O(log N)", index0, index2);
        }
        else if (IsRoughlyEqual(runTimes[2] / index2, runTimes[1] / index1, runTimes[0] / index0))
        {
            Console.WriteLine("Execution time for input = {0} to {1} is roughly O(N)", index0, index2);
        }
        else if (IsRoughlyEqual(runTimes[2] / (Math.Log(index2, 2) * index2), runTimes[1] / (Math.Log(index1, 2) * index1), runTimes[0] / (Math.Log(index0, 2) * index0)))
        {
            Console.WriteLine("Execution time for input = {0} to {1} is roughly O(N log N)", index0, index2);
        }
        else
        {
            for (int pow = 2; pow <= 10; pow++)
            {
                if (IsRoughlyEqual(runTimes[2] / Math.Pow(index2, pow), runTimes[1] / Math.Pow(index1, pow), runTimes[0] / Math.Pow(index0, pow)))
                {
                    Console.WriteLine("Execution time for input = {0} to {1} is roughly O(N^{2})", index0, index2, pow);
                    break;
                }
                else if (pow == 10)
                {
                    Console.WriteLine("Execution time for input = {0} to {1} is greater than O(N^10)", index0, index2);
                }
            }
        }
    }
    Console.WriteLine("Fin.");
}

private static double variance = 0.02;

public static bool IsRoughlyEqual(double value, double lower, double upper)
{
    // returns whether lower, value and upper are each within `variance` of the next value
    return IsBetween(lower, value * (1 - variance), value * (1 + variance)) &&
           IsBetween(value, upper * (1 - variance), upper * (1 + variance));
}

public static bool IsBetween(double value, double lower, double upper)
{
    // returns whether the value is between the other two values +/- variance
    lower = lower * (1 - variance);
    upper = upper * (1 + variance);
    return value > lower && value < upper;
}

public static void CallMagicMethod(int input)
{
    try
    {
        MagicBox.MagicMethod(input);
    }
    catch (ThreadAbortException)
    {
        // expected when the watchdog aborts a long-running input
    }
    catch (Exception ex)
    {
        Console.WriteLine("Unexpected Exception Occurred: {0}", ex.Message);
    }
}
And an example output:
Input value '59' running...
Input value '59' took 1711.8416ms
Input value '14' running...
Input value '14' took 90.9222ms
Input value '43' running...
Input value '43' took 902.7444ms
Input value '22' running...
Input value '22' took 231.5498ms
Input value '50' running...
Input value '50' took 1224.761ms
Input value '27' running...
Input value '27' took 351.3938ms
Input value '5' running...
Input value '5' took 9.8048ms
Input value '28' running...
Input value '28' took 377.8156ms
Input value '26' running...
Input value '26' took 325.4898ms
Input value '46' running...
Input value '46' took 1035.6526ms
Execution time for input = 5 to 22 is greater than O(N^10)
Execution time for input = 14 to 26 is roughly O(N^2)
Execution time for input = 22 to 27 is roughly O(N^2)
Execution time for input = 26 to 28 is roughly O(N^2)
Execution time for input = 27 to 43 is roughly O(N^2)
Execution time for input = 28 to 46 is roughly O(N^2)
Execution time for input = 43 to 50 is roughly O(N^2)
Execution time for input = 46 to 59 is roughly O(N^2)
Fin.
Which shows the magic method is likely O(N^2) for the given inputs +/- 2% variance
and another result here:
Input value '0' took 0.7498ms
Input value '1' took 0.3062ms
Input value '2' took 0.5038ms
Input value '3' took 4.9239ms
Input value '4' took 14.2928ms
Input value '5' took 29.9069ms
Input value '6' took 55.4424ms
Input value '7' took 91.6886ms
Input value '8' took 140.5015ms
Input value '9' took 204.5546ms
Input value '10' took 285.4843ms
Input value '11' took 385.7506ms
Input value '12' took 506.8602ms
Input value '13' took 650.7438ms
Input value '14' took 819.8519ms
Input value '15' took 1015.8124ms
Execution time for input = 0 to 2 is greater than O(N^10)
Execution time for input = 1 to 3 is greater than O(N^10)
Execution time for input = 2 to 4 is greater than O(N^10)
Execution time for input = 3 to 5 is greater than O(N^10)
Execution time for input = 4 to 6 is greater than O(N^10)
Execution time for input = 5 to 7 is greater than O(N^10)
Execution time for input = 6 to 8 is greater than O(N^10)
Execution time for input = 7 to 9 is greater than O(N^10)
Execution time for input = 8 to 10 is roughly O(N^3)
Execution time for input = 9 to 11 is roughly O(N^3)
Execution time for input = 10 to 12 is roughly O(N^3)
Execution time for input = 11 to 13 is roughly O(N^3)
Execution time for input = 12 to 14 is roughly O(N^3)
Execution time for input = 13 to 15 is roughly O(N^3)
Which shows the magic method is likely O(N^3) for the given inputs +/- 2% variance
So it is possible to programmatically determine the complexity of an algorithm; you just need to make sure that you don't introduce additional work which makes it appear slower than it is (such as building all the input for the function before you start timing it).
Beyond that, remember that trying a large series of possible values and timing each one takes significant time. A more realistic test is to call your function with a single large, realistic upper-bound value and determine whether its response time is sufficient for your usage.
You would likely only need to do this if you are performing black-box testing without source code (and can't use something like Reflector to view the source), or if you have to prove to a PHB that the coded algorithms are as fast as they can be (ignoring improvements to constants), as you claim.

Not in general. If the algorithm consists of nested simple for loops, e.g.
for (int i=a; i<b; ++i)
then you know this will contribute (b-a) steps. Now, if either b or a or both depends on n, then you can get a complexity from that. However, if you have something more exotic, like
for (int i=a; i<b; i=whackyFunction(i))
then you really need to understand what whackyFunction(i) does.
Similarly, break statements may screw this up, and while loops may be a lost cause, since you might not even be able to tell whether the loop terminates; a sketch of one such loop follows.
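For instance, in Python (my example, not from the original answer), the loop below has terminated for every n anyone has tried, but whether it terminates for all n > 0 is the open Collatz conjecture, so no automated analyzer can be expected to derive its complexity:

def collatz_steps(n):
    # whether this loop terminates for all n > 0 is the open Collatz conjecture
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps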

Count arithmetic operations, memory accesses, and memory space used inside fibonacci() or whatever it is, and measure its execution time. Do this with different inputs, and look for the emerging trends and the asymptotic behavior.
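As a minimal Python sketch of that approach (the naive fibonacci and the input range are placeholder assumptions, not from the original answer):

import time

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

prev = None
for n in range(20, 31, 2):
    t0 = time.perf_counter()
    fib(n)
    elapsed = time.perf_counter() - t0
    # for exponential-time code the ratio between successive rows is roughly constant
    print(n, round(elapsed, 4), None if prev is None else round(elapsed / prev, 2))
    prev = elapsed

For the naive fibonacci, the ratio column settles near phi^2 ≈ 2.6, the signature of exponential growth.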

General measures like cyclomatic complexity are useful in giving you an idea of the more complex portions of your code, but it is a relatively simple mechanism.

Related

altering Newton's cooling example in Dymola to show sinusoidal behavior

I am trying to alter the Newton cooling problem (link: https://mbe.modelica.university/behavior/equations/physical/#physical-types) so that:
1) T_inf is 300 K for the first 5 seconds.
2) At t = 5 s, it switches to a sine wave with an average value of 400 K, a peak-to-peak amplitude of 50 K, and a period of 10 seconds.
3) At t = 85 s, the period of the sine wave changes to 0.01 seconds, keeping everything else the same. The simulation has to end at 100 s.
I am successful in parts 1 and 2, but part 3 isn't running for me.
My code is below.
model MAE5833_Saleem_NewtonCooling_HW2_default
  // Types
  type Temperature = Real(unit="K", min=0);
  type ConvectionCoefficient = Real(unit="W/(m2.K)", min=0);
  type Area = Real(unit="m2", min=0);
  type Mass = Real(unit="kg", min=0);
  type SpecificHeat = Real(unit="J/(K.kg)", min=0);
  // Parameters
  parameter Temperature T0=400 "Initial temperature";
  parameter ConvectionCoefficient h=0.7 "Convective cooling coefficient";
  parameter Area A=1.0 "Surface area";
  parameter Mass m=0.1 "Mass of thermal capacitance";
  parameter SpecificHeat c_p=1.2 "Specific heat";
  parameter Real freqHz=0.1 "Frequency of sine wave in from 5 to 85 seconds";
  parameter Real freq2=100 "Time period of 0.01s after 85 seconds";
  parameter Real amplitude=25 "Peak to peak of 50K";
  parameter Real starttime=5;
  parameter Real T_init=300;
  parameter Real T_new=400;
  Temperature T "Temperature";
  Temperature T_inf;
initial equation
  T = T0 "Specify initial value for T";
equation
  m*c_p*der(T) = h*A*(T_inf - T) "Newton's law of cooling";
algorithm
  when {time > starttime, time < 85} then
    T_inf := (T_new - T_init) + amplitude*Modelica.Math.sin(2*3.14*freqHz*(time - starttime));
  elsewhen time > 85 then
    T_inf := (T_new - T_init) + amplitude*Modelica.Math.sin(2*3.14*freq2*(time - starttime));
  elsewhen time < starttime then
    T_inf := T_init;
  end when;
  annotation (experiment(
      StopTime=100,
      Interval=0.001,
      __Dymola_Algorithm="Rkfix2"));
end MAE5833_Saleem_NewtonCooling_HW2_default;
You have to use an if statement in this case instead of when.
Here is the updated equation section, with some further suggestions below:
equation
  m*c_p*der(T) = h*A*(T_inf - T) "Newton's law of cooling";
  if time >= starttime and time < 85 then
    T_inf = (T_new - T_init) + amplitude*sin(2*Modelica.Constants.pi*freqHz*(time - starttime));
  elseif time >= 85 then
    T_inf = (T_new - T_init) + amplitude*sin(2*Modelica.Constants.pi*freq2*(time - starttime));
  else
    T_inf = T_init;
  end if;
You can use sin instead of Modelica.Math.sin, as the function is built in.
Use Modelica.Constants.pi instead of defining pi yourself.
I have merged your algorithm section into the equation section; don't use an algorithm section unless there is a very good reason to do so.

Cyclomatic Complexity number - do I have to count every statement separately as node?

I came across different ways of calculating the CCN (according to the formula CCN = E - N + 2P).
One way is to count every statement in the code as its own node; the other is to count a run of consecutive statements as one step. Let's take the following example:
1 public class SumAndAverage {
2
3 public static void main (String[] args) {
4 int sum = 0;
5 double average = 0.0;
6 String message = "";
7
8 int num = Integer.parseInt(args[0]);
9
10 if ((num < 1) || (num > 100)) {
11 message = "Invalid number entered.";
12 } else {
13 for (int i = 1; i <= num; i++) {
14 sum += i;
15 }
16 average = (double) sum / num;
17 message = "The sum is " + sum + " and the average is " + average;
18 }
19 System.out.println(message);
20 }
21}
Counting every statement separately, we'd get 12 - 11 + 2*1 = 3.
I was wondering: if I "join" lines 4, 5, 6 and 8 and count them as one step, and do the same with lines 16 and 17, would that be correct too? The result would be the same, as the number of edges also decreases: 8 - 7 + 2*1 = 3.
The right way to calculate complexity is by considering blocks of code. A block of code is one inside which there is no chance of the execution path branching.
McCabe's paper mentions the below:
The tool, FLOW, was written in APL to input the source code from Fortran files on disk. FLOW would then break a Fortran job into distinct subroutines and analyze the control structure of each subroutine. It does this by breaking the Fortran subroutines into blocks that are delimited by statements that affect control flow: IF, GOTO, referenced LABELS, DO, etc.
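As a rough illustration of counting decisions rather than raw statements, here is a small Python sketch (my own approximation, not McCabe's FLOW tool). It relies on the standard identity that, for a single-entry single-exit control-flow graph, E - N + 2P equals the number of decision points plus one:

import ast

def cyclomatic_complexity(source):
    # 1 + number of branching constructs; a compound condition is treated
    # as a single node, as the question's graph does
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            decisions += 1
    return decisions + 1

code = (
    "if num < 1 or num > 100:\n"
    "    message = 'Invalid number entered.'\n"
    "else:\n"
    "    for i in range(1, num + 1):\n"
    "        total += i\n"
)
print(cyclomatic_complexity(code))  # 3: one if, one for, plus 1

Tools that additionally count each short-circuit and/or as its own predicate ("extended" complexity) would report one more for this example.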
For other information on complexity, also read through Cyclomatic complexity as a Quality measure

Efficient way to generate a seemingly random permutation from a very large set without repeating?

I have a very large set (billions of elements or more; it's expected to grow exponentially to some level), and I want to generate seemingly random elements from it without repeating. I know I can pick a random number, retry on repeats, and record the elements I have generated, but that takes more and more memory as numbers are generated, and wouldn't be practical after a couple million elements.
I mean, I could emit 1, 2, 3 up to billions, each in constant time without remembering all the previous ones, or I could emit 1, 3, 5, 7, 9 and then 2, 4, 6, 8, 10, but is there a more sophisticated way to do that and eventually get a seemingly random permutation of the set?
Update
1. The set does not change size during the generation process. I meant that when the user's input increases linearly, the size of the set increases exponentially.
2. In short, the set is like the set of every integer from 1 to 10 billion or more.
3. At length, it goes that high because each element carries the information of many independent choices. For example, imagine an RPG character that has 10 attributes, each going from 1 to 100 (for my problem, different choices can have different ranges); that gives 10^20 possible characters, and the number "10873456879326587345" would correspond to a character that has attributes "11, 88, 35, ...". I would like an algorithm to generate them one by one without repeating, but make it look random.
Thanks for the interesting question. You can create a "pseudorandom"* (cyclic) permutation with a few bytes using modular exponentiation. Say we have n elements. Search for a prime p that's bigger than n+1. Then find a primitive root g modulo p. By the definition of a primitive root, the map x -> (g * x) % p is a cyclic permutation of {1, ..., p-1}, and so x -> ((g * (x+1)) % p) - 1 is a cyclic permutation of {0, ..., p-2}. We can get a cyclic permutation of {0, ..., n-1} by repeating the previous permutation while it gives a value greater than or equal to n.
I implemented this idea as a Go package. https://github.com/bwesterb/powercycle
package main

import (
    "fmt"

    "github.com/bwesterb/powercycle"
)

func main() {
    var x uint64
    cycle := powercycle.New(10)
    for i := 0; i < 10; i++ {
        fmt.Println(x)
        x = cycle.Apply(x)
    }
}
This outputs something like
0
6
4
1
2
9
3
5
8
7
but that might vary, of course, depending on the generator chosen.
It's fast, but not super fast: on my five-year-old i7 it takes less than 210 ns to compute one application of a cycle on 1000000000000000 elements. More details:
BenchmarkNew10-8 1000000 1328 ns/op
BenchmarkNew1000-8 500000 2566 ns/op
BenchmarkNew1000000-8 50000 25893 ns/op
BenchmarkNew1000000000-8 200000 7589 ns/op
BenchmarkNew1000000000000-8 2000 648785 ns/op
BenchmarkApply10-8 10000000 170 ns/op
BenchmarkApply1000-8 10000000 173 ns/op
BenchmarkApply1000000-8 10000000 172 ns/op
BenchmarkApply1000000000-8 10000000 169 ns/op
BenchmarkApply1000000000000-8 10000000 201 ns/op
BenchmarkApply1000000000000000-8 10000000 204 ns/op
Why did I say "pseudorandom"? Well, we are always creating a very specific kind of cycle: namely one that uses modular exponentiation. It looks pretty pseudorandom though.
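To make the arithmetic concrete, here is a minimal Python sketch of the same trick, with the prime p and the primitive root g hardcoded for a small set (example values I picked by hand; the Go package chooses them for you):

def make_step(n, p, g):
    # x -> ((g * (x + 1)) % p) - 1 cyclically permutes {0, ..., p-2};
    # skipping values >= n restricts the cycle to {0, ..., n-1}
    def step(x):
        while True:
            x = (g * (x + 1)) % p - 1
            if x < n:
                return x
    return step

step = make_step(10, p=13, g=2)  # 13 > 10 + 1 is prime; 2 is a primitive root mod 13
x = 0
for _ in range(10):
    print(x)  # visits each of 0..9 exactly once, in a scrambled order
    x = step(x)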
I would use a random number and swap it with an element at the beginning of the set.
Here's some pseudo code
set = [1, 2, 3, 4, 5, 6]
picked = 0

Function PickNext(set, picked)
    If picked > Len(set) - 1 Then
        Return Nothing
    End If
    // random number between picked (inclusive) and length (exclusive)
    r = RandomInt(picked, Len(set))
    // swap the picked element to the beginning of the set
    result = set[r]
    set[r] = set[picked]
    set[picked] = result
    // update picked
    picked++
    // return your next random element
    Return result
End Function
Every time you pick an element there is one swap, and the only extra memory used is the picked variable. The swap works whether the elements are in a database or in memory.
EDIT Here's a jsfiddle of a working implementation http://jsfiddle.net/sun8rw4d/
JavaScript
var set = [];
set.picked = 0;

function pickNext(set) {
    if (set.picked > set.length - 1) { return null; }
    var r = set.picked + Math.floor(Math.random() * (set.length - set.picked));
    var result = set[r];
    set[r] = set[set.picked];
    set[set.picked] = result;
    set.picked++;
    return result;
}

// testing
for (var i = 0; i < 100; i++) {
    set.push(i);
}
while (pickNext(set) !== null) { }
document.body.innerHTML += set.toString();
EDIT 2: Finally, a random binary walk of the set. This can be accomplished with O(log2(N)) stack space (memory), which for 10 billion elements is only 33 entries. There's no shuffling or swapping involved. Using ternary instead of binary might yield even better pseudorandom results.
// on the fly set generator
var count = 0;
var maxValue = 64;

function nextElement() {
    // restart the generation
    if (count == maxValue) {
        count = 0;
    }
    return count++;
}

// code to pseudo randomly select elements
var current = 0;
var stack = [0, maxValue - 1];

function randomBinaryWalk() {
    if (stack.length == 0) { return null; }
    var high = stack.pop();
    var low = stack.pop();
    var mid = ((high + low) / 2) | 0;
    // pseudo randomly choose the next path
    if (Math.random() > 0.5) {
        if (low <= mid - 1) {
            stack.push(low);
            stack.push(mid - 1);
        }
        if (mid + 1 <= high) {
            stack.push(mid + 1);
            stack.push(high);
        }
    } else {
        if (mid + 1 <= high) {
            stack.push(mid + 1);
            stack.push(high);
        }
        if (low <= mid - 1) {
            stack.push(low);
            stack.push(mid - 1);
        }
    }
    // how many elements to skip
    var toMid = (current < mid ? mid - current : (maxValue - current) + mid);
    // skip elements
    for (var i = 0; i < toMid - 1; i++) {
        nextElement();
    }
    current = mid;
    // get result
    return nextElement();
}

// test
var result;
var list = [];
do {
    result = randomBinaryWalk();
    list.push(result);
} while (result !== null);
document.body.innerHTML += '<br/>' + list.toString();
Here's the results from a couple of runs with a small set of 64 elements. JSFiddle http://jsfiddle.net/yooLjtgu/
30,46,38,34,36,35,37,32,33,31,42,40,41,39,44,45,43,54,50,52,53,51,48,47,49,58,60,59,61,62,56,57,55,14,22,18,20,19,21,16,15,17,26,28,29,27,24,25,23,6,2,4,5,3,0,1,63,10,8,7,9,12,11,13
30,14,22,18,16,15,17,20,19,21,26,28,29,27,24,23,25,6,10,8,7,9,12,13,11,2,0,63,1,4,5,3,46,38,42,44,45,43,40,41,39,34,36,35,37,32,31,33,54,58,56,55,57,60,59,61,62,50,48,49,47,52,51,53
As I mentioned in my comment, unless you have an efficient way to skip to a specific point in your "on the fly" generation of the set this will not be very efficient.
If it is enumerable, then use a pseudo-random integer generator adjusted to the period 0 .. 2^n - 1, where the upper bound is just greater than the size of your set, and generate pseudo-random integers, discarding those larger than the size of your set. Use those integers to index items from your set.
Pre-compute a series of indices (e.g. in a file) which has the properties you need, then randomly choose a start index for your enumeration and use the series in a round-robin manner.
The length of your pre-computed series should be greater than the maximum size of the set.
If you combine this (depending on your programming language etc.) with file mappings, your final nextIndex(INOUT state) function is (nearly) as simple as return mappedIndices[state++ % PERIOD];, if each entry has a fixed size (e.g. 8 bytes -> uint64_t).
Of course, the returned value could be larger than your current set size; simply draw indices until you get one that is <= your set's current size.
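A minimal in-memory Python sketch of that scheme (the file mapping is omitted, and PERIOD is an assumed example value):

import random

PERIOD = 1_000_003                 # must exceed the maximum size the set can grow to
mapped_indices = list(range(PERIOD))
random.shuffle(mapped_indices)     # pre-computed once; could live in a file instead

state = random.randrange(PERIOD)   # random start index into the series

def next_index(set_size):
    # draw until the pre-computed index falls inside the current set
    global state
    while True:
        value = mapped_indices[state % PERIOD]
        state += 1
        if value < set_size:
            return value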
Update (In response to question-update):
There is another option to achieve your goal, if it is about creating 10 billion unique characters in your RPG: generate a GUID and write yourself a function which computes your number from the GUID. man uuid if you are on a unix system; else google it. Some parts of a uuid are not random but contain meta-info; some parts are either systematic (such as your network card's MAC address) or random, depending on the generator algorithm. But they are very likely to be unique. So, whenever you need a new unique number, generate a uuid and transform it to your number by means of some algorithm which basically maps the uuid bytes to your number in a non-trivial way (e.g. using hash functions).

An interview question: About Probability

An interview question:
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1.
Write a function g(x) using f(x) that 1/2 times returns 0, 1/2 times returns 1.
My implementation is:
function g(x) = {
    if (f(x) == 0) {    // 1/4
        var s = f(x)
        if (s == 1) {   // 3/4 * 1/4
            return s    // 3/16
        } else {
            return g(x)
        }
    } else {            // 3/4
        var k = f(x)
        if (k == 0) {   // 1/4 * 3/4
            return k    // 3/16
        } else {
            return g(x)
        }
    }
}
Am I right? What's your solution? (You can use any language.)
If you call f(x) twice in a row, the following outcomes are possible (assuming that
successive calls to f(x) are independent, identically distributed trials):
00 (probability 1/4 * 1/4)
01 (probability 1/4 * 3/4)
10 (probability 3/4 * 1/4)
11 (probability 3/4 * 3/4)
01 and 10 occur with equal probability. So iterate until you get one of those
cases, then return 0 or 1 appropriately:
do {
    a = f(x); b = f(x);
} while (a == b);
return a;
It might be tempting to call f(x) only once per iteration and keep track of the two
most recent values, but that won't work. Suppose the very first roll is 1,
with probability 3/4. You'd loop until the first 0, then return 1 (with probability 3/4).
The problem with your algorithm is that it repeats itself with high probability. My code:
function g(x) = {
    var s = f(x) + f(x) + f(x);
    // s = 0, probability  1/64
    // s = 1, probability  9/64
    // s = 2, probability 27/64
    // s = 3, probability 27/64
    if (s == 2) return 0;
    if (s == 3) return 1;
    return g(x); // probability of recursing is 10/64, at a cost of 3 more f(x) calls
}
I've measured the average number of times f(x) is calculated per g(x) call for your algorithm and for mine. For yours, f(x) is calculated around 5.3 times per g(x) call; with my algorithm that number drops to around 3.5. The same holds for the other answers so far, since they are effectively the same algorithm as yours.
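For reference, the expected call counts can be derived exactly: the pair test succeeds with probability 2 * (1/4) * (3/4) = 3/8 at two calls per round, giving 2 / (3/8) = 16/3 ≈ 5.33 calls on average, while the three-call variant above recurses with probability 10/64 at three calls per round, giving 3 / (54/64) = 192/54 ≈ 3.56; both match the measurements.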
P.S.: your definition doesn't mention 'random' at the moment, but probably it is assumed. See my other answer.
Your solution is correct, if somewhat inefficient and with more duplicated logic than necessary. Here is a Python implementation of the same algorithm in a cleaner form.
def g():
    while True:
        a = f()
        if a != f():
            return a
If f() is expensive, you'd want to get more sophisticated, using the match/mismatch information to return with fewer calls to it. Here is the most efficient possible solution.
def g():
    lower = 0.0
    upper = 1.0
    while True:
        if 0.5 < lower:
            return 1
        elif upper < 0.5:
            return 0
        else:
            middle = 0.25 * lower + 0.75 * upper
            if 0 == f():
                lower = middle
            else:
                upper = middle
This takes about 2.6 calls to f() on average per call to g().
The way that it works is this. We're trying to pick a random number from 0 to 1, but we stop as soon as we know whether the number is below or above 0.5. We start knowing that the number is in the interval (0, 1). 3/4 of the numbers are in the bottom 3/4 of the interval, and 1/4 are in the top 1/4 of the interval; we decide which based on a call to f(x). This means that we are now in a smaller interval.
If we wash, rinse, and repeat enough times, we can determine our number as precisely as we like, and we have an exactly equal probability of winding up in any region of the original interval. In particular, we have an even probability of winding up above or below 0.5.
If you wanted you could repeat the idea to generate an endless stream of bits one by one. This is, in fact, provably the most efficient way of generating such a stream, and is the source of the idea of entropy in information theory.
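To put a number on "most efficient": each call to f(x) carries H = -(1/4)*log2(1/4) - (3/4)*log2(3/4) ≈ 0.811 bits of entropy, so no scheme can average fewer than 1/0.811 ≈ 1.23 calls to f(x) per unbiased output bit. The ~2.6 figure above is for producing a single bit and then starting over; the streaming version amortizes toward that entropy bound.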
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1
Taking this statement literally, f(x), if called four times, will always return zero once and one three times. This is different from saying f(x) is a probabilistic function whose 0-to-1 ratio approaches 1:3 (1/4 vs 3/4) over many iterations. If the first interpretation is valid, then the only function f(x) that meets the criteria, regardless of where in the sequence you start, is the repeating sequence 0111 (or 1011 or 1101 or 1110, which are the same sequence from a different starting point). Given that constraint,
g()= (f() == f())
should suffice.
As already mentioned, your definition is not precise about what "random" means. Usually it implies not only the right overall probability but also a proper distribution: otherwise you could simply write a g(x) which returns 1, 0, 1, 0, 1, 0, 1, 0; it would return them 50/50, but the numbers wouldn't be random.
Another cheating approach might be:
var invert = false;

function g(x) {
    invert = !invert;
    if (invert) return 1 - f(x);
    return f(x);
}
This solution is better than all the others in the sense that it calls f(x) only once per call to g(x), but its results are not very random.
A refinement of the same approach used in btilly's answer, achieving an average of ~1.85 calls to f() per g() result (a further refinement documented below achieves ~1.75; btilly's takes ~2.6, Jim Lewis's accepted answer ~5.33). Code appears lower in the answer.
Basically, I generate random integers in the range 0 to 3 with even probability: the caller can then test bit 0 for the first 50/50 value, and bit 1 for a second. Reason: the f() probabilities of 1/4 and 3/4 map onto quarters much more cleanly than halves.
Description of algorithm
btilly explained the algorithm, but I'll do so in my own way too...
The algorithm basically generates a random real number x between 0 and 1, then returns a result depending on which "result bucket" that number falls in:
result bucket       result
x < 0.25            0
0.25 <= x < 0.5     1
0.5 <= x < 0.75     2
0.75 <= x           3
But, generating a random real number given only f() is difficult. We have to start with the knowledge that our x value should be in the range 0..1 - which we'll call our initial "possible x" space. We then hone in on an actual value for x:
each time we call f():
if f() returns 0 (probability 1 in 4), we consider x to be in the lower quarter of the "possible x" space, and eliminate the upper three quarters from that space
if f() returns 1 (probability 3 in 4), we consider x to be in the upper three-quarters of the "possible x" space, and eliminate the lower quarter from that space
when the "possible x" space is completely contained by a single result bucket, that means we've narrowed x down to the point where we know which result value it should map to and have no need to get a more specific value for x.
It may or may not help to consider this diagram :-):
"result bucket" cut-offs 0,.25,.5,.75,1
0=========0.25=========0.5==========0.75=========1 "possible x" 0..1
| | . . | f() chooses x < vs >= 0.25
| result 0 |------0.4375-------------+----------| "possible x" .25..1
| | result 1| . . | f() chooses x < vs >= 0.4375
| | | . ~0.58 . | "possible x" .4375..1
| | | . | . | f() chooses < vs >= ~.58
| | ||. | | . | 4 distinct "possible x" ranges
Code
int g() // return 0, 1, 2, or 3
{
    if (f() == 0) return 0;
    if (f() == 0) return 1;
    double low = 0.25 + 0.25 * (1.0 - 0.25);
    double high = 1.0;
    while (true)
    {
        double cutoff = low + 0.25 * (high - low);
        if (f() == 0)
            high = cutoff;
        else
            low = cutoff;
        if (high < 0.50) return 1;
        if (low >= 0.75) return 3;
        if (low >= 0.50 && high < 0.75) return 2;
    }
}
If helpful, an intermediary to feed out 50/50 results one at a time:
int h()
{
    static int i;
    if (!i)
    {
        int x = g();
        i = x | 4;
        return x & 1;
    }
    else
    {
        int x = i & 2;
        i = 0;
        return x ? 1 : 0;
    }
}
NOTE: This can be further tweaked by having the algorithm switch from considering an f()==0 result to hone in on the lower quarter, to having it hone in on the upper quarter instead, based on which on average resolves to a result bucket more quickly. Superficially, this seemed useful on the third call to f() when an upper-quarter result would indicate an immediate result of 3, while a lower-quarter result still spans probability point 0.5 and hence results 1 and 2. When I tried it, the results were actually worse. A more complex tuning was needed to see actual benefits, and I ended up writing a brute-force comparison of lower vs upper cutoff for second through eleventh calls to g(). The best result I found was an average of ~1.75, resulting from the 1st, 2nd, 5th and 8th calls to g() seeking low (i.e. setting low = cutoff).
Here is a solution based on central limit theorem, originally due to a friend of mine:
/*
  Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1.
  Write a function g(x) using f(x) that 1/2 times returns 0, 1/2 times returns 1.
*/
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cstdio>
using namespace std;

int f() {
    if (rand() % 4 == 0) return 0;
    return 1;
}

int main() {
    srand(time(0));
    int cc = 0;
    for (int k = 0; k < 1000; k++) { // number of different runs
        int c = 0;
        int limit = 10000; // the bigger the limit, the closer we get to 50 percent
        for (int i = 0; i < limit; ++i) c += f();
        cc += c < limit * 0.75 ? 0 : 1; // this is 0 or 1, each with probability ~50%
    }
    printf("%d\n", cc); // cc is gonna be around 500
    return 0;
}
Since each return of f() represents a 3/4 chance of TRUE, with some algebra we can properly balance the odds. What we want is another function x() which returns TRUE with a balancing probability, so that
function g() {
    return f() && x();
}
returns true 50% of the time.
So let's find the probability of x (p(x)), given p(f) and our desired total probability (1/2):
p(f) * p(x) = 1/2
(3/4) * p(x) = 1/2
p(x) = (1/2) / (3/4)
p(x) = 2/3
So x() should return TRUE with a probability of 2/3, since (3/4) * (2/3) = 6/12 = 1/2.
Thus the following should work for g():
function g() {
    // rand() is assumed here to return a uniform real value in [0, 1)
    return f() && (rand() < 2/3);
}
Assuming
P(f[x] == 0) = 1/4
P(f[x] == 1) = 3/4
and requiring a function g[x] with the following assumptions
P(g[x] == 0) = 1/2
P(g[x] == 1) = 1/2
I believe the following definition of g[x] is sufficient (Mathematica)
g[x_] := If[f[x] + f[x + 1] == 1, 1, 0]
or, alternatively in C
int g(int x)
{
    return f(x) + f(x + 1) == 1 ? 1 : 0;
}
This is based on the idea that invocations of {f[x], f[x+1]} would produce the following outcomes
{
    {0, 0},
    {0, 1},
    {1, 0},
    {1, 1}
}
Summing each of the outcomes we have
{0, 1, 1, 2}
where a sum of 1 represents 1/2 of the possible sum outcomes, with any other sum making up the other 1/2.
Edit.
As bdk says - {0,0} is less likely than {1,1} because
1/4 * 1/4 < 3/4 * 3/4
However, I am confused myself because given the following definition for f[x] (Mathematica)
f[x_] := Mod[x, 4] > 0 /. {False -> 0, True -> 1}
or alternatively in C
int f(int x)
{
    return (x % 4) > 0 ? 1 : 0;
}
then the results obtained from executing f[x] and g[x] seem to have the expected distribution.
Table[f[x], {x, 0, 20}]
{0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0}
Table[g[x], {x, 0, 20}]
{1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1}
This is much like the Monty Hall paradox.
In general:
Public Class Form1
    'the general case
    '
    'twiceThis = 2 is 1 in four chance of 0
    'twiceThis = 3 is 1 in six chance of 0
    '
    'twiceThis = x is 1 in 2x chance of 0
    Const twiceThis As Integer = 7
    Const numOf As Integer = twiceThis * 2

    Private Sub Button1_Click(ByVal sender As System.Object, _
                              ByVal e As System.EventArgs) Handles Button1.Click
        Const tries As Integer = 1000
        y = New List(Of Integer)
        Dim ct0 As Integer = 0
        Dim ct1 As Integer = 0
        Debug.WriteLine("")
        ''show all possible values of fx
        'For x As Integer = 1 To numOf
        '    Debug.WriteLine(fx)
        'Next
        'test that gx returns 50% 0's and 50% 1's
        Dim stpw As New Stopwatch
        stpw.Start()
        For x As Integer = 1 To tries
            Dim g_x As Integer = gx()
            'Debug.WriteLine(g_x.ToString) 'used to verify that gx returns 0 or 1 randomly
            If g_x = 0 Then ct0 += 1 Else ct1 += 1
        Next
        stpw.Stop()
        'the results
        Debug.WriteLine((ct0 / tries).ToString("p1"))
        Debug.WriteLine((ct1 / tries).ToString("p1"))
        Debug.WriteLine((stpw.ElapsedTicks / tries).ToString("n0"))
    End Sub

    Dim prng As New Random
    Dim y As New List(Of Integer)

    Private Function fx() As Integer
        '1 in numOf chance of zero being returned
        If y.Count = 0 Then
            'reload y
            y.Add(0) 'fx has only one zero value
            Do
                y.Add(1) 'the rest are ones
            Loop While y.Count < numOf
        End If
        'return a random value
        Dim idx As Integer = prng.Next(y.Count)
        Dim rv As Integer = y(idx)
        y.RemoveAt(idx) 'remove the value selected
        Return rv
    End Function

    Private Function gx() As Integer
        'a function g(x) using f(x) that 50% of the time returns 0
        ' that 50% of the time returns 1
        Dim rv As Integer = 0
        For x As Integer = 1 To twiceThis
            fx()
        Next
        For x As Integer = 1 To twiceThis
            rv += fx()
        Next
        If rv = twiceThis Then Return 1 Else Return 0
    End Function
End Class

Calculating frames per second in a game

What's a good algorithm for calculating frames per second in a game? I want to show it as a number in the corner of the screen. If I just look at how long it took to render the last frame the number changes too fast.
Bonus points if your answer updates each frame and doesn't converge differently when the frame rate is increasing vs decreasing.
You need a smoothed average. The easiest way is to take the current answer (the time to draw the last frame) and combine it with the previous answer:

// e.g.
float smoothing = 0.9f; // larger = more smoothing
measurement = (measurement * smoothing) + (current * (1.0f - smoothing));

By adjusting the 0.9/0.1 ratio you can change the 'time constant', that is, how quickly the number responds to changes. A larger fraction in favour of the old answer gives a slower, smoother change; a larger fraction in favour of the new answer gives a quicker-changing value. Obviously the two factors must add up to one!
This is what I have used in many games.
#define MAXSAMPLES 100
int tickindex = 0;
int ticksum = 0;
int ticklist[MAXSAMPLES];

/* need to zero out the ticklist array before starting */
/* average will ramp up until the buffer is full */
/* returns average ticks per frame over the MAXSAMPLES last frames */
double CalcAverageTick(int newtick)
{
    ticksum -= ticklist[tickindex];  /* subtract value falling off */
    ticksum += newtick;              /* add new value */
    ticklist[tickindex] = newtick;   /* save new value so it can be subtracted later */
    if (++tickindex == MAXSAMPLES)   /* inc buffer index */
        tickindex = 0;

    /* return average */
    return ((double)ticksum / MAXSAMPLES);
}
Well, certainly
frames / sec = 1 / (sec / frame)
But, as you point out, there's a lot of variation in the time it takes to render a single frame, and from a UI perspective updating the fps value at the frame rate is not usable at all (unless the number is very stable).
What you want is probably a moving average or some sort of binning / resetting counter.
For example, you could maintain a queue data structure which held the rendering times for each of the last 30, 60, 100, or what-have-you frames (you could even design it so the limit was adjustable at run-time). To determine a decent fps approximation you can determine the average fps from all the rendering times in the queue:
fps = # of rendering times in queue / total rendering time
When you finish rendering a new frame you enqueue a new rendering time and dequeue an old rendering time. Alternately, you could dequeue only when the total of the rendering times exceeded some preset value (e.g. 1 sec). You can maintain the "last fps value" and a last updated timestamp so you can trigger when to update the fps figure, if you so desire. Though with a moving average if you have consistent formatting, printing the "instantaneous average" fps on each frame would probably be ok.
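A minimal Python sketch of that queue idea (the 60-frame window is just an example value):

from collections import deque

render_times = deque(maxlen=60)  # rendering times of the last 60 frames, in seconds

def on_frame(render_time):
    # call once per frame with that frame's rendering time
    render_times.append(render_time)  # enqueue new; the oldest falls off automatically
    return len(render_times) / sum(render_times)  # fps = frames / total rendering time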
Another method would be to have a resetting counter. Maintain a precise (millisecond) timestamp, a frame counter, and an fps value. When you finish rendering a frame, increment the counter. When the counter hits a pre-set limit (e.g. 100 frames) or when the time since the timestamp has passed some pre-set value (e.g. 1 sec), calculate the fps:
fps = # frames / (current time - start time)
Then reset the counter to 0 and set the timestamp to the current time.
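And a comparable Python sketch of the resetting counter (the 1-second threshold is an assumption for illustration):

import time

frame_count = 0
fps = 0.0
last_reset = time.perf_counter()

def on_frame_rendered():
    # call this once per rendered frame
    global frame_count, fps, last_reset
    frame_count += 1
    now = time.perf_counter()
    elapsed = now - last_reset
    if elapsed >= 1.0:  # pre-set interval reached
        fps = frame_count / elapsed
        frame_count = 0
        last_reset = now
    return fps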
Increment a counter every time you render a frame, and clear that counter after each time interval over which you want to measure the frame rate.
I.e. every 3 seconds, take counter/3 and then clear the counter.
There are at least two ways to do it:
The first is the one others have mentioned here before me.
I think it's the simplest and preferred way. You just need to keep track of:
cn: counter of how many frames you've rendered
time_start: the time since you've started counting
time_now: the current time
Calculating the fps in this case is as simple as evaluating this formula:
FPS = cn / (time_now - time_start).
Then there is the uber cool way you might like to use some day:
Let's say you have 'i' frames to consider. I'll use this notation: f[0], f[1],..., f[i-1] to describe how long it took to render frame 0, frame 1, ..., frame (i-1) respectively.
Example where i = 3
|f[0] |f[1] |f[2] |
+----------+-------------+-------+------> time
Then, mathematical definition of fps after i frames would be
(1) fps[i] = i / (f[0] + ... + f[i-1])
And the same formula but only considering i-1 frames.
(2) fps[i-1] = (i-1) / (f[0] + ... + f[i-2])
Now the trick here is to rewrite the right side of formula (1) so that it contains the right side of formula (2), and then substitute in the latter's left side.
Like so (you should see it more clearly if you write it on a paper):
fps[i] = i / (f[0] + ... + f[i-1])
= i / ((f[0] + ... + f[i-2]) + f[i-1])
= (i/(i-1)) / ((f[0] + ... + f[i-2])/(i-1) + f[i-1]/(i-1))
= (i/(i-1)) / (1/fps[i-1] + f[i-1]/(i-1))
= ...
= (i*fps[i-1]) / (f[i-1] * fps[i-1] + i - 1)
So according to this formula (my math-deriving skills are a bit rusty though), to calculate the new fps you need to know the fps from the previous frame, the time it took to render the last frame, and the number of frames you've rendered; a small sketch follows.
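A small Python sketch of that recurrence (my own illustration; frame_times is placeholder data in seconds):

def next_fps(prev_fps, last_frame_time, i):
    # fps[i] = (i * fps[i-1]) / (f[i-1] * fps[i-1] + i - 1), valid for i >= 2
    return (i * prev_fps) / (last_frame_time * prev_fps + i - 1)

frame_times = [0.016, 0.017, 0.015, 0.020]
fps = 1.0 / frame_times[0]  # seed: fps[1] = 1 / f[0]
for i, ft in enumerate(frame_times[1:], start=2):
    fps = next_fps(fps, ft, i)
print(fps)  # equals len(frame_times) / sum(frame_times)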
This might be overkill for most people; that's why I hadn't posted it when I implemented it. But it's very robust and flexible.
It stores a Queue with the last frame times, so it can accurately calculate an average FPS value, much better than just taking the last frame into consideration.
It also allows you to ignore one frame if you are doing something that you know is going to artificially screw up that frame's time.
It also allows you to change the number of frames to store in the Queue as it runs, so you can test on the fly which value works best for you.
// Number of past frames to use for FPS smooth calculation - because
// Unity's smoothedDeltaTime, well - it kinda sucks
private int frameTimesSize = 60;

// A Queue is the perfect data structure for the smoothed FPS task;
// new values in, old values out
private Queue<float> frameTimes = new Queue<float>();

// Not really needed, but used for faster updating than reprocessing
// the entire queue every frame
private float __frameTimesSum = 0;

// Flag to ignore the next frame when performing a heavy one-time operation
// (like changing resolution)
private bool _fpsIgnoreNextFrame = false;

//=============================================================================
// Call this after doing a heavy operation that will screw up the FPS calculation
void FPSIgnoreNextFrame() {
    this._fpsIgnoreNextFrame = true;
}

//=============================================================================
// Smoothed FPS counter updating
void Update()
{
    if (this._fpsIgnoreNextFrame) {
        this._fpsIgnoreNextFrame = false;
        return;
    }

    // Looping here allows the frameTimesSize member to be changed dynamically
    while (this.frameTimes.Count >= this.frameTimesSize) {
        this.__frameTimesSum -= this.frameTimes.Dequeue();
    }
    while (this.frameTimes.Count < this.frameTimesSize) {
        this.__frameTimesSum += Time.deltaTime;
        this.frameTimes.Enqueue(Time.deltaTime);
    }
}

//=============================================================================
// Public function to get smoothed FPS values
public int GetSmoothedFPS() {
    return (int)(this.frameTimesSize / this.__frameTimesSum * Time.timeScale);
}
Good answers here. How you implement it depends on what you need it for. I prefer the running average one myself, "time = time * 0.9 + last_frame * 0.1", from the answer above.
However, I personally like to weight my average more heavily towards newer data, because in a game it is SPIKES that are the hardest to squash and thus of most interest to me. So I would use something more like a 0.7/0.3 split, which will make a spike show up much faster (though its effect will also drop off-screen faster; see below).
If your focus is on RENDERING time, then the 0.9/0.1 split works pretty nicely, because it tends to be smoother. For gameplay/AI/physics, though, spikes are much more of a concern, as that is usually what makes your game look choppy (which is often worse than a low frame rate, assuming we're not dipping below 20 fps).
So, what I would do is also add something like this:
#define ONE_OVER_FPS (1.0f / 60.0f)
static float g_SpikeGuardBreakpoint = 3.0f * ONE_OVER_FPS;

if (time > g_SpikeGuardBreakpoint)
    DoInternalBreakpoint();
(Fill in 3.0f with whatever magnitude you find to be an unacceptable spike.)
This will let you find, and thus solve, FPS issues at the end of the very frame they happen in.
A much better system than using a large array of old framerates is to just do something like this:
fps = fps * 0.99 + this_frame_fps * 0.01
This method uses far less memory, requires far less code, and places more importance upon recent framerates than old framerates, while still smoothing the effects of sudden framerate changes.
You could keep a counter, increment it after each frame is rendered, then reset the counter when you are on a new second (storing the previous value as the last second's # of frames rendered)
JavaScript:

// Record the start time
var start = (new Date).getTime(), end, FPS;
/* ...
 * the loop/block you want to watch
 * ...
 */
end = (new Date).getTime();
// the times are in milliseconds, so dividing 1000 ms by the time
// one pass took gives frames per second
FPS = Math.round(1000 / (end - start));
Here's a complete example, using Python (but easily adapted to any language). It uses the smoothing equation in Martin's answer, so almost no memory overhead, and I chose values that worked for me (feel free to play around with the constants to adapt to your use case).
import time

SMOOTHING_FACTOR = 0.99
MAX_FPS = 10000
avg_fps = -1
last_tick = time.time()

while True:
    # <Do your rendering work here...>
    current_tick = time.time()
    # Ensure we don't get crazy large frame rates, by capping to MAX_FPS
    current_fps = 1.0 / max(current_tick - last_tick, 1.0 / MAX_FPS)
    last_tick = current_tick
    if avg_fps < 0:
        avg_fps = current_fps
    else:
        avg_fps = (avg_fps * SMOOTHING_FACTOR) + (current_fps * (1 - SMOOTHING_FACTOR))
    print(avg_fps)
Set a counter to zero. Each time you draw a frame, increment the counter. After each second, print the counter. Lather, rinse, repeat. If you want extra credit, keep a running counter and divide by the total number of seconds for a running average.
In (C++-like) pseudocode, these two are what I used in industrial image-processing applications that had to process images from a set of externally triggered cameras. Variations in "frame rate" had a different source (slower or faster production on the belt), but the problem is the same. (I assume that you have a simple timer.peek() call that gives you something like the number of msec (nsec?) since application start or since the last call.)
Solution 1: fast but not updated every frame
do while (1)
{
    ProcessImage(frame)
    if (frame.framenumber % poll_interval == 0)
    {
        new_time = timer.peek()
        framerate = poll_interval / (new_time - last_time)
        last_time = new_time
    }
}
Solution 2: updated every frame, requires more memory and CPU
do while (1)
{
    ProcessImage(frame)
    new_time = timer.peek()
    delta = new_time - last_time
    last_time = new_time
    total_time += delta
    delta_history.push(delta)
    framerate = delta_history.length() / total_time
    while (delta_history.length() > avg_interval)
    {
        oldest_delta = delta_history.pop()
        total_time -= oldest_delta
    }
}
qx.Class.define('FpsCounter', {
    extend: qx.core.Object,

    construct: function() {
        this.base(arguments);
        this.restart();
    },

    members: {
        restart: function() {
            this.__frames = [];
        },

        addFrame: function() {
            this.__frames.push(new Date());
        },

        getFps: function(averageFrames) {
            if (!averageFrames) {
                averageFrames = 2;
            }
            var time = 0;
            var l = this.__frames.length;
            var i = averageFrames;
            while (i > 0) {
                if (l - i - 1 >= 0) {
                    time += this.__frames[l - i] - this.__frames[l - i - 1];
                }
                i--;
            }
            var fps = averageFrames / time * 1000;
            return fps;
        }
    }
});
How I do it!

boolean run = false;
int ticks = 0;
long tickstart;
int fps;

public void loop()
{
    if (this.ticks == 0)
    {
        this.tickstart = System.currentTimeMillis();
    }
    this.ticks++;
    long elapsed = System.currentTimeMillis() - this.tickstart;
    if (elapsed > 0)
    {
        // ticks per millisecond, scaled by 1000 to give frames per second
        this.fps = (int)(this.ticks * 1000L / elapsed);
    }
}

In words, a tick clock tracks ticks. On the first tick, it records the current time in 'tickstart'. After that, it sets 'fps' to the number of ticks divided by the elapsed time in seconds (with a guard against dividing by zero on the very first tick).
Fps is an integer, hence the "(int)" cast.
Here's how I do it (in Java):
private static long ONE_SECOND = 1000000L * 1000L; // 1 second is 1000 ms, which is 1,000,000,000 ns

LinkedList<Long> frames = new LinkedList<>(); // List of frames within 1 second

public int calcFPS() {
    long time = System.nanoTime(); // Current time in nanoseconds
    frames.add(time);              // Add this frame to the list
    while (true) {
        long f = frames.getFirst();  // Look at the first element in frames
        if (time - f > ONE_SECOND) { // If it was more than 1 second ago
            frames.remove();         // Remove it from the list of frames
        } else break;
        /* If it was within 1 second we know that all other frames in the list
         * are also within 1 second
         */
    }
    return frames.size(); // Return the size of the list
}
In Typescript, I use this algorithm to calculate framerate and frametime averages:
let getTime = () => {
    return new Date().getTime();
}

let frames: any[] = [];
let previousTime = getTime();
let framerate: number = 0;
let frametime: number = 0;

let updateStats = (samples: number = 60) => {
    samples = Math.max(samples, 1) >> 0;
    if (frames.length === samples) {
        let currentTime: number = getTime() - previousTime;
        frametime = currentTime / samples;
        framerate = 1000 * samples / currentTime;
        previousTime = getTime();
        frames = [];
    }
    frames.push(1);
}
usage:
updateStats();
// Print
stats.innerHTML = Math.round(framerate) + ' FPS ' + frametime.toFixed(2) + ' ms';
Tip: If samples is 1, the result is real-time framerate and frametime.
This is based on KPexEA's answer and gives the Simple Moving Average. Tidied and converted to TypeScript for easy copy and paste:
Variable declaration:
fpsObject = {
    maxSamples: 100,
    tickIndex: 0,
    tickSum: 0,
    tickList: []
}
Function:
calculateFps(currentFps: number): number {
    this.fpsObject.tickSum -= this.fpsObject.tickList[this.fpsObject.tickIndex] || 0
    this.fpsObject.tickSum += currentFps
    this.fpsObject.tickList[this.fpsObject.tickIndex] = currentFps
    if (++this.fpsObject.tickIndex === this.fpsObject.maxSamples) this.fpsObject.tickIndex = 0
    const smoothedFps = this.fpsObject.tickSum / this.fpsObject.maxSamples
    return Math.floor(smoothedFps)
}
Usage (may vary in your app):
this.fps = this.calculateFps(this.ticker.FPS)
I adapted KPexEA's answer to Go, moved the globals into struct fields, allowed the number of samples to be configurable, and used time.Duration instead of plain integers and floats.
type FrameTimeTracker struct {
    samples []time.Duration
    sum     time.Duration
    index   int
}

func NewFrameTimeTracker(n int) *FrameTimeTracker {
    return &FrameTimeTracker{
        samples: make([]time.Duration, n),
    }
}

func (t *FrameTimeTracker) AddFrameTime(frameTime time.Duration) (average time.Duration) {
    // algorithm adapted from https://stackoverflow.com/a/87732/814422
    t.sum -= t.samples[t.index]
    t.sum += frameTime
    t.samples[t.index] = frameTime
    t.index++
    if t.index == len(t.samples) {
        t.index = 0
    }
    return t.sum / time.Duration(len(t.samples))
}
The use of time.Duration, which has nanosecond precision, eliminates the need for floating-point arithmetic to compute the average frame time, but comes at the expense of needing twice as much memory for the same number of samples.
You'd use it like this:
// track the last 60 frame times
frameTimeTracker := NewFrameTimeTracker(60)

// main game loop
for frame := 0; ; frame++ {
    // ...
    if frame > 0 {
        // prevFrameTime is the duration of the last frame
        avgFrameTime := frameTimeTracker.AddFrameTime(prevFrameTime)
        fps := 1.0 / avgFrameTime.Seconds()
    }
    // ...
}
Since the context of this question is game programming, I'll add some more notes about performance and optimization. The above approach is idiomatic Go but always involves two heap allocations: one for the struct itself and one for the array backing the slice of samples. If used as indicated above, these are long-lived allocations so they won't really tax the garbage collector. Profile before optimizing, as always.
However, if performance is a major concern, some changes can be made to eliminate the allocations and indirections:
Change samples from a slice of []time.Duration to an array of [N]time.Duration where N is fixed at compile time. This removes the flexibility of changing the number of samples at runtime, but in most cases that flexibility is unnecessary.
Then, eliminate the NewFrameTimeTracker constructor function entirely and use a var frameTimeTracker FrameTimeTracker declaration (at the package level or local to main) instead. Unlike C, Go will pre-zero all relevant memory.
Unfortunately, most of the answers here don't provide either accurate enough or sufficiently "slow responsive" FPS measurements. Here's how I do it in Rust using a measurement queue:
use std::collections::VecDeque;
use std::time::{Duration, Instant};

pub struct FpsCounter {
    sample_period: Duration,
    max_samples: usize,
    creation_time: Instant,
    frame_count: usize,
    measurements: VecDeque<FrameCountMeasurement>,
}

#[derive(Copy, Clone)]
struct FrameCountMeasurement {
    time: Instant,
    frame_count: usize,
}

impl FpsCounter {
    pub fn new(sample_period: Duration, samples: usize) -> Self {
        assert!(samples > 1);
        Self {
            sample_period,
            max_samples: samples,
            creation_time: Instant::now(),
            frame_count: 0,
            measurements: VecDeque::new(),
        }
    }

    pub fn fps(&self) -> f32 {
        match (self.measurements.front(), self.measurements.back()) {
            (Some(start), Some(end)) => {
                let period = (end.time - start.time).as_secs_f32();
                if period > 0.0 {
                    (end.frame_count - start.frame_count) as f32 / period
                } else {
                    0.0
                }
            }
            _ => 0.0,
        }
    }

    pub fn update(&mut self) {
        self.frame_count += 1;
        let current_measurement = self.measure();
        let last_measurement = self
            .measurements
            .back()
            .copied()
            .unwrap_or(FrameCountMeasurement {
                time: self.creation_time,
                frame_count: 0,
            });
        if (current_measurement.time - last_measurement.time) >= self.sample_period {
            self.measurements.push_back(current_measurement);
            while self.measurements.len() > self.max_samples {
                self.measurements.pop_front();
            }
        }
    }

    fn measure(&self) -> FrameCountMeasurement {
        FrameCountMeasurement {
            time: Instant::now(),
            frame_count: self.frame_count,
        }
    }
}
How to use:
Create the counter:
let mut fps_counter = FpsCounter::new(Duration::from_millis(100), 5);
Call fps_counter.update() on every frame drawn.
Call fps_counter.fps() whenever you like to display current FPS.
Now, the key is in the parameters to the FpsCounter::new() method: sample_period is how responsive fps() is to changes in framerate, and samples controls how quickly fps() ramps up or down to the actual framerate. So if you choose 10 ms and 100 samples, fps() would react almost instantly to any change in framerate; the FPS value on the screen would jitter like crazy, but since there are 100 samples, it would take 1 second to match the actual framerate.
So my choice of 100 ms and 5 samples means the displayed FPS counter doesn't make your eyes bleed by changing crazily fast, and it will match your actual framerate half a second after it changes, which is sensible enough for a game.
Since sample_period * samples is the averaging time span, you don't want it to be too short if you want a reasonably accurate FPS counter.
Store a start time and increment your frame counter once per loop. Every few seconds you could just print framecount / (now - starttime) and then reinitialize them.
edit: oops, double-ninja'ed
