Beacon distance shows wrong values with Eddystone - iBeacon

I am getting the wrong distance when I choose the Eddystone protocol for my Kontakt beacon.
Kontakt lists different calibrated RSSI values for each Tx power level:
Tx Power      RSSI for iBeacon @ 1m    RSSI for Eddystone @ 0m
0 (-30dBm)    -115                     -74
1 (-20dBm)    -84                      -43
2 (-16dBm)    -81                      -40
3 (-12dBm)    -77                      -36
4 (-8dBm)     -72                      -31
5 (-4dBm)     -69                      -28
6 (0dBm)      -65                      -24
7 (4dBm)      -59                      -18
Why are all the distances so far off when using Eddystone, while everything works fine when I use iBeacon?
Here is an example of my code:
public static let signalLossAtOneMeter: Int = -41

public static func calculateDistance(rssi: Float, calibratedRssi: Float, calibratedDistance: Float, pathLossParameter: Float) -> Float {
    return calculateDistance(rssi: rssi,
                             calibratedRssi: getCalibratedRssiAtOneMeter(calibratedRssi: calibratedRssi, calibratedDistance: calibratedDistance),
                             pathLossParameter: BeaconDistanceCalculator.pathLossParameter)
}

public static func getCalibratedRssiAtOneMeter(calibratedRssi: Float, calibratedDistance: Float) -> Float {
    let calibratedRssiAtOneMeter: Float
    if calibratedDistance == IBeacon.calibrationDistanceDefault {
        calibratedRssiAtOneMeter = calibratedRssi
    } else if calibratedDistance == Eddystone.calibrationDistanceDefault {
        calibratedRssiAtOneMeter = calibratedRssi + Float(BeaconDistanceCalculator.signalLossAtOneMeter)
    } else {
        calibratedRssiAtOneMeter = -62
    }
    return calibratedRssiAtOneMeter
}

public static func calculateDistance(rssi: Float, calibratedRssi: Float, pathLossParameter: Float) -> Float {
    return pow(10, (calibratedRssi - rssi) / (10 * pathLossParameter)) as Float
}

I'm not sure what the logic in the getCalibratedRssiAtOneMeter method is intended to accomplish -- this should be a fixed value for each beacon, based on the strength of its transmitter in the location where it is installed. You should actually measure this rather than use a manufacturer's lookup table, because it can vary due to reflections: some nearby objects act as a "backplane" and strengthen the signal, while others attenuate it.
The key thing about Eddystone is that it encodes its calibrated RSSI inside the beacon packet as a 0m reference value, instead of iBeacon's 1m reference value. This effectively means that after reading the constant out of the Eddystone packet, you must add -41 dB to it before plugging it into your formula. This converts the 0m reference value into a 1m reference value.
If you don't do this conversion, the distance estimates on Eddystone will come out far too large.
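As a rough illustration of that conversion (a minimal sketch in Go rather than the poster's Swift; the -41 dB offset comes from the snippet above, while the path-loss exponent of 2.0 and the sample RSSI values are assumptions you would calibrate yourself):

package main

import (
	"fmt"
	"math"
)

// distanceMeters estimates distance from RSSI using the log-distance path-loss model.
// calibratedRssiAt1m is the expected RSSI at 1 m; pathLoss is the environment exponent (~2.0 in free space).
func distanceMeters(rssi, calibratedRssiAt1m, pathLoss float64) float64 {
	return math.Pow(10, (calibratedRssiAt1m-rssi)/(10*pathLoss))
}

func main() {
	const signalLossAtOneMeter = -41.0 // assumed offset from the 0 m to the 1 m reference

	eddystoneTxAt0m := -24.0 // value broadcast in the Eddystone frame (Tx power 6 in the table above)
	rssiAt1m := eddystoneTxAt0m + signalLossAtOneMeter

	measured := -70.0
	fmt.Println(distanceMeters(measured, rssiAt1m, 2.0)) // roughly 1.8 m
}

Without the -41 dB adjustment the same measurement would be interpreted as roughly 200 m, which matches the "everything looks far away" symptom.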

Related

Calculating sensing range from sensing sensitivity of the device in Castalia?

I am implementing a WSN algorithm in Castalia. I need to calculate the sensing range of the sensing device. I know I will need to use the sensing sensitivity parameter, but what is the exact equation?
The answer will vary depending on the behaviour specified by the PhysicalProcess module used. Since you say in your comment that you may be using the CarsPhysicalProcess, let's use that as an example.
A sensor reading request initiated by the application is first sent to the SensorManager via a SensorReadingMessage message. In SensorManager.cc you can see how this is processed in its handleMessage function:
...
case SENSOR_READING_MESSAGE: {
    SensorReadingMessage *rcvPacket = check_and_cast<SensorReadingMessage*>(msg);
    int sensorIndex = rcvPacket->getSensorIndex();
    simtime_t currentTime = simTime();
    simtime_t interval = currentTime - sensorlastSampleTime[sensorIndex];
    int getNewSample = (interval < minSamplingIntervals[sensorIndex]) ? 0 : 1;

    if (getNewSample) { // the last request for sample was more than minSamplingIntervals[sensorIndex] time ago
        PhysicalProcessMessage *requestMsg =
            new PhysicalProcessMessage("sample request", PHYSICAL_PROCESS_SAMPLING);
        requestMsg->setSrcID(self);              // insert information about the ID of the node
        requestMsg->setSensorIndex(sensorIndex); // insert information about the index of the sensor
        requestMsg->setXCoor(nodeMobilityModule->getLocation().x);
        requestMsg->setYCoor(nodeMobilityModule->getLocation().y);

        // send the request to the physical process (using the appropriate
        // gate index for the respective sensor device)
        send(requestMsg, "toNodeContainerModule", corrPhyProcess[sensorIndex]);

        // update the most recent sample times in sensorlastSampleTime[]
        sensorlastSampleTime[sensorIndex] = currentTime;
    } else { // send back the old sample value
        rcvPacket->setSensorType(sensorTypes[sensorIndex].c_str());
        rcvPacket->setSensedValue(sensorLastValue[sensorIndex]);
        send(rcvPacket, "toApplicationModule");
        return;
    }
    break;
}
....
As you can see, it first works out how much time has elapsed since the last sensor reading request for this sensor. If that is less than the minimum sampling interval possible for this sensor (determined by the maxSampleRates NED parameter of the SensorManager), it just returns the last sensor reading it gave out. If it is greater, a new sensor reading is made.
A new sensor reading is made by sending a PhysicalProcessMessage message to the PhysicalProcess module (via the toNodeContainerModule gate). In the message we pass the X and Y coordinates of the node.
Now, if we have specified CarsPhysicalProcess as the physical process to be used in our omnetpp.ini file, the CarsPhysicalProcess module will receive this message. You can see this in CarsPhysicalProcess.cc:
....
case PHYSICAL_PROCESS_SAMPLING: {
    PhysicalProcessMessage *phyMsg = check_and_cast<PhysicalProcessMessage *>(msg);

    // get the sensed value based on node location
    phyMsg->setValue(calculateScenarioReturnValue(
        phyMsg->getXCoor(), phyMsg->getYCoor(), phyMsg->getSendingTime()));

    // Send reply back to the node who made the request
    send(phyMsg, "toNode", phyMsg->getSrcID());
    return;
}
...
You can see that we calculate a sensor value based on the X and Y coordinates of the node, and the time at which the sensor reading was made. The response is sent back to the SensorManager via the toNode gate. So we need to look at the calculateScenarioReturnValue function to understand what's going on:
double CarsPhysicalProcess::calculateScenarioReturnValue(const double &x_coo,
                                                         const double &y_coo, const simtime_t &stime)
{
    double retVal = 0.0f;
    int i;
    double linear_coeff, distance, x, y;

    for (i = 0; i < max_num_cars; i++) {
        if (sources_snapshots[i][1].time >= stime) {
            linear_coeff = (stime - sources_snapshots[i][0].time) /
                (sources_snapshots[i][1].time - sources_snapshots[i][0].time);
            x = sources_snapshots[i][0].x + linear_coeff *
                (sources_snapshots[i][1].x - sources_snapshots[i][0].x);
            y = sources_snapshots[i][0].y + linear_coeff *
                (sources_snapshots[i][1].y - sources_snapshots[i][0].y);
            distance = sqrt((x_coo - x) * (x_coo - x) +
                            (y_coo - y) * (y_coo - y));
            retVal += pow(K_PARAM * distance + 1, -A_PARAM) * car_value;
        }
    }
    return retVal;
}
We start with a sensor return value of 0. Then we loop over every car that is on the road (if you look at the TIMER_SERVICE case statement in the handleMessage function, you will see that CarsPhysicalProcess puts cars on the road at random according to the car_interarrival rate, up to a maximum of max_num_cars cars). For every car, we work out how far down the road it has travelled, then calculate the distance between the car and the node, and add to the return value according to the formula:
pow(K_PARAM * distance + 1, -A_PARAM) * car_value
Where distance is the distance we have calculated between the car and the node, K_PARAM = 0.1, A_PARAM = 1 (defined at the top of CarsPhysicalProcess.cc) and car_value is a number specified in the CarsPhysicalProcess.ned parameter file (default is 30).
This value is passed back to the SensorManager. The SensorManager then may change this value depending on the sensitivity, resolution, noise and bias of the sensor (defined as SensorManager parameters):
....
case PHYSICAL_PROCESS_SAMPLING:
{
    PhysicalProcessMessage *phyReply = check_and_cast<PhysicalProcessMessage*>(msg);
    int sensorIndex = phyReply->getSensorIndex();
    double theValue = phyReply->getValue();

    // add the sensor's Bias and the random noise
    theValue += sensorBias[sensorIndex];
    theValue += normal(0, sensorNoiseSigma[sensorIndex], 1);

    // process the limitations of the sensing device (sensitivity, resolution and saturation)
    if (theValue < sensorSensitivity[sensorIndex])
        theValue = sensorSensitivity[sensorIndex];
    if (theValue > sensorSaturation[sensorIndex])
        theValue = sensorSaturation[sensorIndex];
    theValue = sensorResolution[sensorIndex] * lrint(theValue / sensorResolution[sensorIndex]);
....
So you can see that if the value is below the sensitivity of the sensor, the floor of the sensitivity is returned.
So basically you can see that there is no specific 'sensing range' in Castalia - it all depends on how the specific PhysicalProcess handles the message. In the case of CarsPhysicalProcess, as long as there is a car on the road, it will always return a value, regardless of the distance - the value just might be very small if the car is a long distance away from the node. If the value is very small, you may receive the lowest sensor sensitivity instead. You could increase or decrease the car_value parameter to get a stronger or weaker response from the sensor (so this is kind of like a sensor range).
EDIT---
The default sensitivity (which you can find in SensorManager.ned) is 0. Therefore for CarsPhysicalProcess, any car on the road at any distance should be detected and returned as a value greater than 0. In other words, there is an unlimited range. If the car is very, very far away it may return a number so small it becomes truncated to zero (this depends on the precision limits of a double value in the C++ implementation).
If you wanted to implement a sensing range, you would have to set a value for devicesSensitivity in SensorManager.ned. Then in your application, you would test whether the returned value is greater than the sensitivity value - if it is, the car is 'in range'; if it is (almost) equal to the sensitivity, it is out of range. I say almost because (as we have seen earlier) the SensorManager adds noise to the value returned, so for example if you have a sensitivity value of 5, and no cars, you will get values which hover slightly around 5 (e.g. 5.0001, 4.99).
With a sensitivity value set, to calculate the sensing range (assuming only 1 car on the road), this means simply solving the equation above for distance, using the minimum sensitivity value as the returned value. i.e. if we use a sensitivity value of 5:
5 = pow(K_PARAM * distance + 1, -A_PARAM) * car_value
Substitute values for the parameters, and use algebra to solve for distance.
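For instance, solving sensitivity = car_value * (K_PARAM * distance + 1)^(-A_PARAM) for distance gives distance = ((car_value / sensitivity)^(1/A_PARAM) - 1) / K_PARAM. A minimal sketch of that calculation (in Go rather than Castalia's C++, assuming the default parameter values quoted above):

package main

import (
	"fmt"
	"math"
)

// sensingRange solves sensitivity = carValue * (k*d + 1)^(-a) for d.
func sensingRange(sensitivity, carValue, k, a float64) float64 {
	return (math.Pow(carValue/sensitivity, 1/a) - 1) / k
}

func main() {
	// Assumed defaults from CarsPhysicalProcess: K_PARAM = 0.1, A_PARAM = 1, car_value = 30
	fmt.Println(sensingRange(5, 30, 0.1, 1)) // 50: a single car closer than ~50 units reads above the sensitivity floor
}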

Go: Converting float64 to int with multiplier

I want to convert a float64 number, let's say 1.003, to 1003 (integer type). My implementation simply multiplies the float64 by 1000 and casts it to int.
package main

import "fmt"

func main() {
	var f float64 = 1.003
	fmt.Println(int(f * 1000))
}
But when I run that code, what I get is 1002, not 1003, because 1.003 cannot be represented exactly in binary floating point and is actually stored as 1.002999.... What is the correct approach to this kind of operation in Go?
Go spec: Conversions:
Conversions between numeric types
When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).
So basically when you convert a floating-point number to an integer, only the integer part is kept.
If you just want to avoid the error arising from representing the number with a finite number of bits, just add 0.5 to the number before converting it to int. No external libraries or function calls (from the standard library) are required.
Since float -> int conversion is not rounding but keeping the integer part, this will give you the desired result. Taking into consideration both the possibly smaller and greater representation:
1002.9999 + 0.5 = 1003.4999; integer part: 1003
1003.0001 + 0.5 = 1003.5001; integer part: 1003
So simply just write:
var f float64 = 1.003
fmt.Println(int(f * 1000 + 0.5))
To wrap this into a function:
func toint(f float64) int {
	return int(f + 0.5)
}

// Using it:
fmt.Println(toint(f * 1000))
Try them on the Go Playground.
Note:
Be careful when you apply this in case of negative numbers! For example if you have a value of -1.003, then you probably want the result to be -1003. But if you add 0.5 to it:
-1002.9999 + 0.5 = -1002.4999; integer part: -1002
-1003.0001 + 0.5 = -1002.5001; integer part: -1002
So if you have negative numbers, you have to either:
subtract 0.5 instead of adding it
or add 0.5 but subtract 1 from the result
Incorporating this into our helper function:
func toint(f float64) int {
	if f < 0 {
		return int(f - 0.5)
	}
	return int(f + 0.5)
}
As Will mentions, this comes down to how floats are represented on various platforms. Essentially you need to round the float rather than let the default truncating behavior happen. For a long time there was no standard library function for this, probably because there's a lot of possible behavior and it's trivial to implement.
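(For what it's worth, Go 1.10 and later do ship math.Round, which rounds half away from zero and so handles negative values too; a minimal sketch:)

package main

import (
	"fmt"
	"math"
)

func main() {
	f := 1.003
	g := -1.003
	fmt.Println(int(math.Round(f * 1000))) // 1003
	fmt.Println(int(math.Round(g * 1000))) // -1003
}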
If you knew you'd always have errors of the sort described, where you're slightly below (1299.999999) the value desired (1300.00000) you could use the math library's Ceil function:
f := 1.29999
n := math.Ceil(f*1000)
But what if you have different kinds of floating error and want a more general rounding behavior? Use the math library's Modf function to separate your floating point value at the decimal point:
f := 1.29999
f1, f2 := math.Modf(f * 1000)
n := int(f1) // n = 1299
if f2 > .5 {
	n++
}
fmt.Println(n)
You can run a slightly more generalized version of this code in the playground yourself.
This is likely a problem with floating point in general in most programming languages, though some have different implementations than others. I won't go into the intricacies here, but most languages usually have a "decimal" approach, either in the standard library or as a third-party library, to get finer precision.
For instance, I've found the inf.v0 package largely useful. Underlying the library is a Dec struct that holds the exponents and the integer value. Therefore, it's able to hold 1.003 as 1003 * 10^-3. See below for an example:
package main

import (
	"fmt"
	"gopkg.in/inf.v0"
)

func main() {
	// represents 1003 * 10^-3
	someDec := inf.NewDec(1003, 3)

	// multiply someDec by 1000 * 10^0
	// which translates to 1003 * 10^-3 * 1000 * 10^0
	someDec.Mul(someDec, inf.NewDec(1000, 0))

	// inf.RoundHalfUp rounds half up in the 0th scale, e.g. 0.5 rounds to 1
	value, ok := someDec.Round(someDec, 0, inf.RoundHalfUp).Unscaled()
	fmt.Println(value, ok)
}
Hope this helps!

Convert uint64 to int64 without loss of information

The problem with the following code:
var x uint64 = 18446744073709551615
var y int64 = int64(x)
is that y is -1. Without loss of information, is the only way to convert between these two number types to use an encoder and decoder?
buff bytes.Buffer
Encoder(buff).encode(x)
Decoder(buff).decode(y)
Note, I am not attempting a straight numeric conversion in your typical case. I am more concerned with maintaining the statistical properties of a random number generator.
Your conversion does not lose any information; all the bits are untouched. It is just that:
uint64(18446744073709551615) = 0xFFFFFFFFFFFFFFFF
int64(-1) = 0xFFFFFFFFFFFFFFFF
Try:
var x uint64 = 18446744073709551615 - 3
and you will have y = -4.
For instance: playground
var x uint64 = 18446744073709551615 - 3
var y int64 = int64(x)
fmt.Printf("%b\n", x)
fmt.Printf("%b or %d\n", y, y)
Output:
1111111111111111111111111111111111111111111111111111111111111100
-100 or -4
Seeing -1 would be consistent with a process running as 32 bits.
See for instance the Go 1.1 release notes (which made int 64 bits on 64-bit platforms):
x := ^uint32(0) // x is 0xffffffff
i := int(x) // i is -1 on 32-bit systems, 0xffffffff on 64-bit
fmt.Println(i)
Using fmt.Printf("%b\n", y) can help to see what is going on (see ANisus' answer).
As it turned out, the OP wheaties confirmed (in the comments) that it was initially run on 32 bits (hence this answer), but then realized that 18446744073709551615 is 0xffffffffffffffff (-1) anyway: see ANisus' answer.
The types uint64 and int64 can both represent 2^64 discrete integer values.
The difference between the two is that uint64 holds only non-negative integers (0 through 2^64-1), whereas int64 holds both negative and positive integers, using 1 bit for the sign (-2^63 through 2^63-1).
As others have said, if your generator is producing 0xffffffffffffffff, uint64 will represent this as the raw integer (18,446,744,073,709,551,615) whereas int64 will interpret the two's complement value and return -1.
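Since the OP's concern is preserving the statistical properties of a random number generator, here is a minimal sketch showing that the plain conversion is a lossless, reversible reinterpretation of the same 64 bits, so no encoder/decoder is needed:

package main

import "fmt"

func main() {
	var x uint64 = 18446744073709551615 // 0xFFFFFFFFFFFFFFFF

	y := int64(x)  // reinterpret the same 64 bits as signed
	z := uint64(y) // convert back: every bit survives the round trip

	fmt.Println(y)            // -1
	fmt.Println(z == x)       // true
	fmt.Printf("%x\n%x\n", x, z) // identical bit patterns
}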

How to calculate g values from LIS3DH sensor?

I am using the LIS3DH sensor with an ATmega128 to get acceleration values to detect motion. I went through the datasheet but it seemed inadequate, so I decided to post here. From other posts I am convinced that the sensor resolution is 12 bit instead of 16 bit. I need to know: when computing the g value from the x-axis output registers, do we take the two's complement of the register values only when the sign bit (the MSB of OUT_X_H, the high register) is 1, or every time, even when this bit is 0?
From my calculations I think we take the two's complement only when the MSB of OUT_X_H is 1.
But the datasheet says that we need to take the two's complement of both OUT_X_L and OUT_X_H every time.
Could anyone enlighten me on this?
Sample code
int main(void)
{
    stdout = &uart_str;
    UCSRB = 0x18; // RXEN=1, TXEN=1
    UCSRC = 0x06; // no parity, 1 stop bit, 8-bit data
    UBRRH = 0;
    UBRRL = 71;   // baud 9600
    timer_init();
    TWBR = 216;   // 400HZ
    TWSR = 0x03;
    TWCR |= (1<<TWINT)|(1<<TWSTA)|(0<<TWSTO)|(1<<TWEN); // TWCR=0x04;
    printf("\r\nLIS3D address: %x\r\n", twi_master_getchar(0x0F));
    twi_master_putchar(0x23, 0b000100000);
    printf("\r\nControl 4 register 0x23: %x", twi_master_getchar(0x23));
    printf("\r\nStatus register %x", twi_master_getchar(0x27));
    twi_master_putchar(0x20, 0x77);
    DDRB = 0xFF;
    PORTB = 0xFD;
    SREG = 0x80; // sei();
    while(1)
    {
        process();
    }
}

void process(void){
    x_l = twi_master_getchar(0x28);
    x_h = twi_master_getchar(0x29);
    y_l = twi_master_getchar(0x2a);
    y_h = twi_master_getchar(0x2b);
    z_l = twi_master_getchar(0x2c);
    z_h = twi_master_getchar(0x2d);
    xvalue = (short int)(x_l + (x_h<<8));
    yvalue = (short int)(y_l + (y_h<<8));
    zvalue = (short int)(z_l + (z_h<<8));
    printf("\r\nx_val: %ldg", x_val);
    printf("\r\ny_val: %ldg", y_val);
    printf("\r\nz_val: %ldg", z_val);
}
I wrote CTRL_REG4 as 0x10 (4g) but when I read it back I got 0x20 (8g). This seems a bit bizarre.
Do not compute the 2s complement. That has the effect of making the result the negative of what it was.
Instead, the datasheet tells us the result is already a signed value. That is, 0 is not the lowest value; it is in the middle of the scale. (0xffff is just a little less than zero, not the highest value.)
Also, the result is always 16 bits, but it is not meant to be taken as that accurate. You can set a control register value to generate more accurate values at the expense of current consumption, but it is still not guaranteed to be accurate to the last bit.
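As a rough sketch of treating the registers as an already-signed value (in Go rather than AVR C; the 12-bit high-resolution mode and the ~1 mg/digit scale at the ±2g range are assumptions taken from the LIS3DH datasheet, so verify them against your own configuration):

package main

import "fmt"

// rawToG combines OUT_X_L / OUT_X_H into a signed, right-justified 12-bit value
// and scales it to g. Assumes high-resolution mode (12 bit, left justified) and
// the ±2g range, where the datasheet lists roughly 1 mg/digit.
func rawToG(low, high uint8) float64 {
	raw := int16(uint16(high)<<8 | uint16(low)) // the sign is already in bit 15
	return float64(raw>>4) * 0.001              // arithmetic shift keeps the sign; 1 mg/digit assumed
}

func main() {
	// Example: OUT_X_H = 0xFC, OUT_X_L = 0x00 -> raw = -1024 -> -64 counts -> -0.064 g
	fmt.Println(rawToG(0x00, 0xFC))
}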
The datasheet does not say (at least in the register description in chapter 8.2) that you have to calculate the two's complement; it states that the contents of the two registers are in two's complement.
So all you have to do is receive the two bytes and cast them to an int16_t to get the signed raw value.
uint8_t xl = 0x00;
uint8_t xh = 0xFC;
int16_t x = (int16_t)((((uint16_t)xh) << 8) | xl);
or
uint8_t xa[2] = {0x00, 0xFC}; // little endian: lower byte at the lower address
int16_t x = *((int16_t*)xa);
(hope I did not mix something up with this)
I have another approach, which may be easier to implement as the compiler will do all of the work for you. The compiler will probably do it most efficiently and with no bugs too.
Read the raw data into the raw field in:
typedef union
{
    struct
    {
        // in low power - 8 significant bits, left justified
        int16_t reserved : 8;
        int16_t value : 8;
    } lowPower;

    struct
    {
        // in normal power - 10 significant bits, left justified
        int16_t reserved : 6;
        int16_t value : 10;
    } normalPower;

    struct
    {
        // in high resolution - 12 significant bits, left justified
        int16_t reserved : 4;
        int16_t value : 12;
    } highPower;

    // the raw data as read from registers H and L
    uint16_t raw;
} LIS3DH_RAW_CONVERTER_T;
Then use the value needed according to the power mode you are using.
Note: in this example the bit-field structs assume big-endian bit ordering.
Check whether you need to reverse the order of 'value' and 'reserved' on your compiler.
The LISxDH sensors are 2's complement, left-justified. They can be set to 12-bit, 10-bit, or 8-bit resolution. This is read from the sensor as two 8-bit values (LSB, MSB) that need to be assembled together.
If you set the resolution to 8-bit, you can just cast the LSB to int8, which is likely your processor's representation of 2's complement (8-bit). Likewise, if it were possible to set the sensor to 16-bit resolution, you could just cast that to an int16.
However, if the value is 10-bit left justified, the sign bit is in the wrong place for an int16. Here is how you convert it to int16 (16-bit 2's complement).
1.Read LSB, MSB from the sensor:
[MMMM MMMM] [LL00 0000]
[1001 0101] [1100 0000] //example = [0x95] [0xC0] (note that the LSB comes before MSB on the sensor)
2.Assemble the bytes, keeping in mind the LSB is left-justified.
//---As an example....
uint8_t byteMSB = 0x95; //[1001 0101]
uint8_t byteLSB = 0xC0; //[1100 0000]
//---Cast to U16 to make room, then combine the bytes---
assembledValue = ( (uint16_t)(byteMSB) << UINT8_LEN ) | (uint16_t)byteLSB;
/*[MMMM MMMM LL00 0000]
[1001 0101 1100 0000] = 0x95C0 */
//---Shift to right justify---
assembledValue >>= (INT16_LEN-numBits);
/*[0000 00MM MMMM MMLL]
[0000 0010 0101 0111] = 0x0257 */
3.Convert from 10-bit 2's complement (now right-justified) to an int16 (which is just 16-bit 2's complement on most platforms).
Approach #1: If the sign bit (in our example, the tenth bit) = 0, then just cast it to int16 (since positive numbers are represented the same in 10-bit 2's complement and 16-bit 2's complement).
If the sign bit = 1, then invert the bits (keeping just the 10bits), add 1 to the result, then multiply by -1 (as per the definition of 2's complement).
convertedValueI16 = ~assembledValue; //invert bits
convertedValueI16 &= ( 0xFFFF>>(16-numBits) ); //but keep just the 10-bits
convertedValueI16 += 1; //add 1
convertedValueI16 *=-1; //multiply by -1
/*Note that the last two lines could be replaced by convertedValueI16 = ~convertedValueI16;*/
//result = -425 = 0xFE57 = [1111 1110 0101 0111]
Approach #2: Flip the sign bit (the tenth bit) and subtract out half the range, 1<<9.
//----Flip the sign bit (tenth bit)----
convertedValueI16 = (int16_t)( assembledValue^( 0x0001<<(numBits-1) ) );
/*Result = 87 = 0x57 [0000 0000 0101 0111]*/
//----Subtract out half the range----
convertedValueI16 -= ( (int16_t)(1)<<(numBits-1) );
[0000 0000 0101 0111]
-[0000 0010 0000 0000]
= [1111 1110 0101 0111];
/*Result = 87 - 512 = -425 = 0xFE57 */
Link to script to try out (not optimized): http://tpcg.io/NHmBRR

Calculating frames per second in a game

What's a good algorithm for calculating frames per second in a game? I want to show it as a number in the corner of the screen. If I just look at how long it took to render the last frame the number changes too fast.
Bonus points if your answer updates each frame and doesn't converge differently when the frame rate is increasing vs decreasing.
You need a smoothed average; the easiest way is to take the current answer (the time to draw the last frame) and combine it with the previous answer.
// eg.
float smoothing = 0.9; // larger=more smoothing
measurement = (measurement * smoothing) + (current * (1.0-smoothing))
By adjusting the 0.9 / 0.1 ratio you can change the 'time constant' - that is how quickly the number responds to changes. A larger fraction in favour of the old answer gives a slower smoother change, a large fraction in favour of the new answer gives a quicker changing value. Obviously the two factors must add to one!
This is what I have used in many games.
#define MAXSAMPLES 100
int tickindex=0;
int ticksum=0;
int ticklist[MAXSAMPLES];
/* need to zero out the ticklist array before starting */
/* average will ramp up until the buffer is full */
/* returns average ticks per frame over the MAXSAMPLES last frames */
double CalcAverageTick(int newtick)
{
    ticksum -= ticklist[tickindex];  /* subtract value falling off */
    ticksum += newtick;              /* add new value */
    ticklist[tickindex] = newtick;   /* save new value so it can be subtracted later */
    if(++tickindex == MAXSAMPLES)    /* inc buffer index */
        tickindex = 0;

    /* return average */
    return((double)ticksum/MAXSAMPLES);
}
Well, certainly
frames / sec = 1 / (sec / frame)
But, as you point out, there's a lot of variation in the time it takes to render a single frame, and from a UI perspective updating the fps value at the frame rate is not usable at all (unless the number is very stable).
What you want is probably a moving average or some sort of binning / resetting counter.
For example, you could maintain a queue data structure which held the rendering times for each of the last 30, 60, 100, or what-have-you frames (you could even design it so the limit was adjustable at run-time). To determine a decent fps approximation you can determine the average fps from all the rendering times in the queue:
fps = # of rendering times in queue / total rendering time
When you finish rendering a new frame you enqueue a new rendering time and dequeue an old rendering time. Alternately, you could dequeue only when the total of the rendering times exceeded some preset value (e.g. 1 sec). You can maintain the "last fps value" and a last updated timestamp so you can trigger when to update the fps figure, if you so desire. Though with a moving average if you have consistent formatting, printing the "instantaneous average" fps on each frame would probably be ok.
Another method would be to have a resetting counter. Maintain a precise (millisecond) timestamp, a frame counter, and an fps value. When you finish rendering a frame, increment the counter. When the counter hits a pre-set limit (e.g. 100 frames) or when the time since the timestamp has passed some pre-set value (e.g. 1 sec), calculate the fps:
fps = # frames / (current time - start time)
Then reset the counter to 0 and set the timestamp to the current time.
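A minimal sketch of that resetting-counter idea (in Go; the 1-second window and the sleep standing in for rendering are assumptions, adjust to your loop):

package main

import (
	"fmt"
	"time"
)

// fpsCounter implements the "binning / resetting counter" approach:
// count frames, and once the window has elapsed, publish frames/elapsed and reset.
type fpsCounter struct {
	window    time.Duration
	start     time.Time
	frames    int
	lastValue float64
}

func (c *fpsCounter) frameRendered() float64 {
	c.frames++
	if elapsed := time.Since(c.start); elapsed >= c.window {
		c.lastValue = float64(c.frames) / elapsed.Seconds()
		c.frames = 0
		c.start = time.Now()
	}
	return c.lastValue
}

func main() {
	counter := &fpsCounter{window: time.Second, start: time.Now()}
	for i := 0; i < 300; i++ {
		time.Sleep(16 * time.Millisecond) // stand-in for rendering a frame
		fmt.Printf("\rFPS: %.1f", counter.frameRendered())
	}
}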
Increment a counter every time you render a screen and clear that counter after each time interval over which you want to measure the frame rate.
I.e. every 3 seconds, get counter/3 and then clear the counter.
There are at least two ways to do it:
The first is the one others have mentioned here before me.
I think it's the simplest and preferred way. You just need to keep track of:
cn: counter of how many frames you've rendered
time_start: the time since you've started counting
time_now: the current time
Calculating the fps in this case is as simple as evaluating this formula:
FPS = cn / (time_now - time_start).
Then there is the uber cool way you might like to use some day:
Let's say you have 'i' frames to consider. I'll use this notation: f[0], f[1],..., f[i-1] to describe how long it took to render frame 0, frame 1, ..., frame (i-1) respectively.
Example where i = 3
|f[0] |f[1] |f[2] |
+----------+-------------+-------+------> time
Then, mathematical definition of fps after i frames would be
(1) fps[i] = i / (f[0] + ... + f[i-1])
And the same formula but only considering i-1 frames.
(2) fps[i-1] = (i-1) / (f[0] + ... + f[i-2])
Now the trick here is to modify the right side of formula (1) so that it contains the right side of formula (2), and then replace that with its left side.
Like so (you should see it more clearly if you write it on a paper):
fps[i] = i / (f[0] + ... + f[i-1])
= i / ((f[0] + ... + f[i-2]) + f[i-1])
= (i/(i-1)) / ((f[0] + ... + f[i-2])/(i-1) + f[i-1]/(i-1))
= (i/(i-1)) / (1/fps[i-1] + f[i-1]/(i-1))
= ...
= (i*fps[i-1]) / (f[i-1] * fps[i-1] + i - 1)
So according to this formula (my math-deriving skills are a bit rusty, though), to calculate the new fps you need to know the fps from the previous frame, how long it took to render the last frame, and the number of frames you've rendered.
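A small sketch of that incremental formula (in Go; frame times are assumed to be in seconds and the sample values are made up):

package main

import "fmt"

// nextFPS implements fps[i] = (i * fps[i-1]) / (f[i-1] * fps[i-1] + i - 1),
// i.e. it updates the running FPS from the previous FPS, the last frame time
// (in seconds) and the number of frames rendered so far (i >= 2).
func nextFPS(prevFPS, lastFrameTime float64, i int) float64 {
	n := float64(i)
	return (n * prevFPS) / (lastFrameTime*prevFPS + n - 1)
}

func main() {
	frameTimes := []float64{0.016, 0.017, 0.033, 0.016} // f[0], f[1], ...
	fps := 1 / frameTimes[0]                            // fps[1] = 1 / f[0]
	for i := 2; i <= len(frameTimes); i++ {
		fps = nextFPS(fps, frameTimes[i-1], i)
	}
	fmt.Println(fps) // identical to len(frameTimes) / sum(frameTimes)
}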
This might be overkill for most people, that's why I hadn't posted it when I implemented it. But it's very robust and flexible.
It stores a Queue with the last frame times, so it can accurately calculate an average FPS value much better than just taking the last frame into consideration.
It also allows you to ignore one frame, if you are doing something that you know is going to artificially screw up that frame's time.
It also allows you to change the number of frames stored in the Queue as it runs, so you can test on the fly what the best value is for you.
// Number of past frames to use for FPS smooth calculation - because
// Unity's smoothedDeltaTime, well - it kinda sucks
private int frameTimesSize = 60;

// A Queue is the perfect data structure for the smoothed FPS task;
// new values in, old values out
private Queue<float> frameTimes;

// Not really needed, but used for faster updating than processing
// the entire queue every frame
private float __frameTimesSum = 0;

// Flag to ignore the next frame when performing a heavy one-time operation
// (like changing resolution)
private bool _fpsIgnoreNextFrame = false;

//=============================================================================
// Call this after doing a heavy operation that will screw up the FPS calculation
void FPSIgnoreNextFrame() {
    this._fpsIgnoreNextFrame = true;
}

//=============================================================================
// Smoothed FPS counter updating
void Update()
{
    if (this._fpsIgnoreNextFrame) {
        this._fpsIgnoreNextFrame = false;
        return;
    }

    // While-looping here allows the frameTimesSize member to be changed dynamically
    while (this.frameTimes.Count >= this.frameTimesSize) {
        this.__frameTimesSum -= this.frameTimes.Dequeue();
    }
    while (this.frameTimes.Count < this.frameTimesSize) {
        this.__frameTimesSum += Time.deltaTime;
        this.frameTimes.Enqueue(Time.deltaTime);
    }
}

//=============================================================================
// Public function to get smoothed FPS values
public int GetSmoothedFPS() {
    return (int)(this.frameTimesSize / this.__frameTimesSum * Time.timeScale);
}
Good answers here. Just how you implement it depends on what you need it for. I prefer the running average one myself, "time = time * 0.9 + last_frame * 0.1", from the guy above.
However, I personally like to weight my average more heavily towards newer data, because in a game it is SPIKES that are the hardest to squash and thus of most interest to me. So I would use something more like a .7/.3 split, which will make a spike show up much faster (though its effect will drop off-screen faster as well... see below).
If your focus is on RENDERING time, then the .9/.1 split works pretty nicely because it tends to be smoother. Though for gameplay/AI/physics, spikes are much more of a concern, as that is usually what makes your game look choppy (which is often worse than a low frame rate, assuming we're not dipping below 20 fps).
So, what I would do is also add something like this:
#define ONE_OVER_FPS (1.0f/60.0f)
static float g_SpikeGuardBreakpoint = 3.0f * ONE_OVER_FPS;
if(time > g_SpikeGuardBreakpoint)
DoInternalBreakpoint()
(fill in 3.0f with whatever magnitude you find to be an unacceptable spike)
This will let you find, and thus solve, FPS issues at the end of the frame in which they happen.
A much better system than using a large array of old framerates is to just do something like this:
new_fps = old_fps * 0.99 + new_fps * 0.01
This method uses far less memory, requires far less code, and places more importance upon recent framerates than old framerates while still smoothing the effects of sudden framerate changes.
You could keep a counter, increment it after each frame is rendered, then reset the counter when you are on a new second (storing the previous value as the last second's # of frames rendered)
JavaScript:
// Set the end and start times
var start = (new Date).getTime(), end, FPS;
/* ...
* the loop/block your want to watch
* ...
*/
end = (new Date).getTime();
// since the times are by millisecond, use 1000 (1000ms = 1s)
// then multiply the result by (MaxFPS / 1000)
// FPS = (1000 - (end - start)) * (MaxFPS / 1000)
FPS = Math.round((1000 - (end - start)) * (60 / 1000));
Here's a complete example, using Python (but easily adapted to any language). It uses the smoothing equation in Martin's answer, so almost no memory overhead, and I chose values that worked for me (feel free to play around with the constants to adapt to your use case).
import time

SMOOTHING_FACTOR = 0.99
MAX_FPS = 10000

avg_fps = -1
last_tick = time.time()

while True:
    # <Do your rendering work here...>

    current_tick = time.time()
    # Ensure we don't get crazy large frame rates, by capping to MAX_FPS
    current_fps = 1.0 / max(current_tick - last_tick, 1.0/MAX_FPS)
    last_tick = current_tick
    if avg_fps < 0:
        avg_fps = current_fps
    else:
        avg_fps = (avg_fps * SMOOTHING_FACTOR) + (current_fps * (1-SMOOTHING_FACTOR))
    print(avg_fps)
Set a counter to zero. Each time you draw a frame, increment the counter. After each second, print the counter. Lather, rinse, repeat. If you want extra credit, keep a running counter and divide by the total number of seconds for a running average.
In (C++-like) pseudocode, these two are what I used in industrial image-processing applications that had to process images from a set of externally triggered cameras. Variations in "frame rate" had a different source (slower or faster production on the belt) but the problem is the same. (I assume that you have a simple timer.peek() call that gives you something like the number of msec (nsec?) since application start or since the last call.)
Solution 1: fast but not updated every frame
do while (1)
{
    ProcessImage(frame)
    if (frame.framenumber % poll_interval == 0)
    {
        new_time = timer.peek()
        framerate = poll_interval / (new_time - last_time)
        last_time = new_time
    }
}
Solution 2: updated every frame, requires more memory and CPU
do while (1)
{
    ProcessImage(frame)
    new_time = timer.peek()
    delta = new_time - last_time
    last_time = new_time
    total_time += delta
    delta_history.push(delta)
    framerate = delta_history.length() / total_time
    while (delta_history.length() > avg_interval)
    {
        oldest_delta = delta_history.pop()
        total_time -= oldest_delta
    }
}
qx.Class.define('FpsCounter', {
    extend: qx.core.Object

    ,properties: {
    }

    ,events: {
    }

    ,construct: function(){
        this.base(arguments);
        this.restart();
    }

    ,statics: {
    }

    ,members: {
        restart: function(){
            this.__frames = [];
        }

        ,addFrame: function(){
            this.__frames.push(new Date());
        }

        ,getFps: function(averageFrames){
            if(!averageFrames){
                averageFrames = 2;
            }
            var time = 0;
            var l = this.__frames.length;
            var i = averageFrames;
            while(i > 0){
                if(l - i - 1 >= 0){
                    time += this.__frames[l - i] - this.__frames[l - i - 1];
                }
                i--;
            }
            var fps = averageFrames / time * 1000;
            return fps;
        }
    }
});
How I do it!

boolean run = false;
int ticks = 0;
long tickstart;
int fps;

public void loop()
{
    if(this.ticks == 0)
    {
        this.tickstart = System.currentTimeMillis();
    }
    this.ticks++;
    long elapsed = System.currentTimeMillis() - this.tickstart;
    if(elapsed > 0)
    {
        // elapsed is in milliseconds, so scale by 1000 to get frames per second
        this.fps = (int)(this.ticks * 1000L / elapsed);
    }
}

In words, a tick clock tracks ticks. If it is the first time, it takes the current time and puts it in 'tickstart'. After the first tick, it makes the variable 'fps' equal the number of ticks of the tick clock divided by the elapsed time in seconds.
Fps is an integer, hence the "(int)" cast.
Here's how I do it (in Java):
private static long ONE_SECOND = 1000000L * 1000L; // 1 second is 1000 ms, which is 1,000,000,000 ns
LinkedList<Long> frames = new LinkedList<>(); //List of frames within 1 second
public int calcFPS(){
    long time = System.nanoTime(); // Current time in nanoseconds
    frames.add(time); // Add this frame to the list
    while(true){
        long f = frames.getFirst(); // Look at the first element in frames
        if(time - f > ONE_SECOND){ // If it was more than 1 second ago
            frames.remove(); // Remove it from the list of frames
        } else break;
        /* If it was within 1 second we know that all other frames in the list
         * are also within 1 second
         */
    }
    return frames.size(); // Return the size of the list
}
In Typescript, I use this algorithm to calculate framerate and frametime averages:
let getTime = () => {
    return new Date().getTime();
}

let frames: any[] = [];
let previousTime = getTime();
let framerate: number = 0;
let frametime: number = 0;

let updateStats = (samples: number = 60) => {
    samples = Math.max(samples, 1) >> 0;

    if (frames.length === samples) {
        let currentTime: number = getTime() - previousTime;

        frametime = currentTime / samples;
        framerate = 1000 * samples / currentTime;

        previousTime = getTime();
        frames = [];
    }

    frames.push(1);
}
usage:
updateStats();
// Print
stats.innerHTML = Math.round(framerate) + ' FPS ' + frametime.toFixed(2) + ' ms';
Tip: If samples is 1, the result is real-time framerate and frametime.
This is based on KPexEA's answer and gives the Simple Moving Average. Tidied and converted to TypeScript for easy copy and paste:
Variable declaration:
fpsObject = {
    maxSamples: 100,
    tickIndex: 0,
    tickSum: 0,
    tickList: []
}
Function:
calculateFps(currentFps: number): number {
    this.fpsObject.tickSum -= this.fpsObject.tickList[this.fpsObject.tickIndex] || 0
    this.fpsObject.tickSum += currentFps
    this.fpsObject.tickList[this.fpsObject.tickIndex] = currentFps
    if (++this.fpsObject.tickIndex === this.fpsObject.maxSamples) this.fpsObject.tickIndex = 0
    const smoothedFps = this.fpsObject.tickSum / this.fpsObject.maxSamples
    return Math.floor(smoothedFps)
}
Usage (may vary in your app):
this.fps = this.calculateFps(this.ticker.FPS)
I adapted #KPexEA's answer to Go, moved the globals into struct fields, allowed the number of samples to be configurable, and used time.Duration instead of plain integers and floats.
type FrameTimeTracker struct {
	samples []time.Duration
	sum     time.Duration
	index   int
}

func NewFrameTimeTracker(n int) *FrameTimeTracker {
	return &FrameTimeTracker{
		samples: make([]time.Duration, n),
	}
}

func (t *FrameTimeTracker) AddFrameTime(frameTime time.Duration) (average time.Duration) {
	// algorithm adapted from https://stackoverflow.com/a/87732/814422
	t.sum -= t.samples[t.index]
	t.sum += frameTime
	t.samples[t.index] = frameTime
	t.index++
	if t.index == len(t.samples) {
		t.index = 0
	}
	return t.sum / time.Duration(len(t.samples))
}
The use of time.Duration, which has nanosecond precision, eliminates the need for floating-point arithmetic to compute the average frame time, but comes at the expense of needing twice as much memory for the same number of samples.
You'd use it like this:
// track the last 60 frame times
frameTimeTracker := NewFrameTimeTracker(60)

// main game loop
for frame := 0; ; frame++ {
	// ...
	if frame > 0 {
		// prevFrameTime is the duration of the last frame
		avgFrameTime := frameTimeTracker.AddFrameTime(prevFrameTime)
		fps := 1.0 / avgFrameTime.Seconds()
	}
	// ...
}
Since the context of this question is game programming, I'll add some more notes about performance and optimization. The above approach is idiomatic Go but always involves two heap allocations: one for the struct itself and one for the array backing the slice of samples. If used as indicated above, these are long-lived allocations so they won't really tax the garbage collector. Profile before optimizing, as always.
However, if performance is a major concern, some changes can be made to eliminate the allocations and indirections:
Change samples from a slice of []time.Duration to an array of [N]time.Duration where N is fixed at compile time. This removes the flexibility of changing the number of samples at runtime, but in most cases that flexibility is unnecessary.
Then, eliminate the NewFrameTimeTracker constructor function entirely and use a var frameTimeTracker FrameTimeTracker declaration (at the package level or local to main) instead. Unlike C, Go will pre-zero all relevant memory.
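A hedged sketch of what those two changes might look like (the fixed sample count and the type name are illustrative, not part of the original answer):

package main

import "time"

const numFrameSamples = 60 // fixed at compile time

type FixedFrameTimeTracker struct {
	samples [numFrameSamples]time.Duration // array, not slice: no separate backing allocation
	sum     time.Duration
	index   int
}

func (t *FixedFrameTimeTracker) AddFrameTime(frameTime time.Duration) time.Duration {
	t.sum -= t.samples[t.index]
	t.sum += frameTime
	t.samples[t.index] = frameTime
	t.index++
	if t.index == numFrameSamples {
		t.index = 0
	}
	return t.sum / numFrameSamples
}

// Package-level variable: zeroed by the runtime, so no constructor and no explicit heap allocation.
var frameTimeTracker FixedFrameTimeTracker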
Unfortunately, most of the answers here don't provide FPS measurements that are both accurate and sufficiently smooth. Here's how I do it in Rust, using a measurement queue:
use std::collections::VecDeque;
use std::time::{Duration, Instant};

pub struct FpsCounter {
    sample_period: Duration,
    max_samples: usize,
    creation_time: Instant,
    frame_count: usize,
    measurements: VecDeque<FrameCountMeasurement>,
}

#[derive(Copy, Clone)]
struct FrameCountMeasurement {
    time: Instant,
    frame_count: usize,
}

impl FpsCounter {
    pub fn new(sample_period: Duration, samples: usize) -> Self {
        assert!(samples > 1);
        Self {
            sample_period,
            max_samples: samples,
            creation_time: Instant::now(),
            frame_count: 0,
            measurements: VecDeque::new(),
        }
    }

    pub fn fps(&self) -> f32 {
        match (self.measurements.front(), self.measurements.back()) {
            (Some(start), Some(end)) => {
                let period = (end.time - start.time).as_secs_f32();
                if period > 0.0 {
                    (end.frame_count - start.frame_count) as f32 / period
                } else {
                    0.0
                }
            }
            _ => 0.0,
        }
    }

    pub fn update(&mut self) {
        self.frame_count += 1;
        let current_measurement = self.measure();
        let last_measurement = self
            .measurements
            .back()
            .copied()
            .unwrap_or(FrameCountMeasurement {
                time: self.creation_time,
                frame_count: 0,
            });
        if (current_measurement.time - last_measurement.time) >= self.sample_period {
            self.measurements.push_back(current_measurement);
            while self.measurements.len() > self.max_samples {
                self.measurements.pop_front();
            }
        }
    }

    fn measure(&self) -> FrameCountMeasurement {
        FrameCountMeasurement {
            time: Instant::now(),
            frame_count: self.frame_count,
        }
    }
}
How to use:
Create the counter:
let mut fps_counter = FpsCounter::new(Duration::from_millis(100), 5);
Call fps_counter.update() on every frame drawn.
Call fps_counter.fps() whenever you like to display current FPS.
Now, the key is in the parameters to the FpsCounter::new() method: sample_period is how responsive fps() is to changes in framerate, and samples controls how quickly fps() ramps up or down to the actual framerate. So if you choose 10 ms and 100 samples, fps() would react almost instantly to any change in framerate - basically, the FPS value on the screen would jitter like crazy, but since there are 100 samples, it would take 1 second to match the actual framerate.
So my choice of 100 ms and 5 samples means that the displayed FPS counter doesn't make your eyes bleed by changing crazy fast, and it will match your actual framerate half a second after it changes, which is sensible enough for a game.
Since sample_period * samples is averaging time span, you don't want it to be too short if you want a reasonably accurate FPS counter.
Store a start time and increment your frame counter once per loop? Every few seconds you could just print framecount/(now - starttime) and then reinitialize them.
edit: oops. double-ninja'ed
