Calculating frames per second in a game - algorithm

What's a good algorithm for calculating frames per second in a game? I want to show it as a number in the corner of the screen. If I just look at how long it took to render the last frame the number changes too fast.
Bonus points if your answer updates each frame and doesn't converge differently when the frame rate is increasing vs decreasing.

You need a smoothed average; the easiest way is to take the current answer (the time to draw the last frame) and combine it with the previous answer.
// e.g.
float smoothing = 0.9; // larger = more smoothing
measurement = (measurement * smoothing) + (current * (1.0 - smoothing));
By adjusting the 0.9 / 0.1 ratio you can change the 'time constant' - that is, how quickly the number responds to changes. A larger fraction in favour of the old answer gives a slower, smoother change; a larger fraction in favour of the new answer gives a quicker-changing value. Obviously the two factors must add to one!
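For illustration, the same smoothing written out as a runnable sketch (Python here; the constant and names are just for this example), smoothing the frame time and deriving the displayed FPS from it:

```python
SMOOTHING = 0.9  # larger = more smoothing (slower response to change)

def smooth(previous, current, smoothing=SMOOTHING):
    """One step of the running average: combine the previous answer with
    the newest per-frame measurement (e.g. the last frame time in seconds)."""
    return previous * smoothing + current * (1.0 - smoothing)

# If frame times settle at 1/60 s, the smoothed time (and thus the
# displayed FPS) converges towards 60 regardless of where it started.
frame_time = 1.0 / 30.0
for _ in range(200):
    frame_time = smooth(frame_time, 1.0 / 60.0)
fps = 1.0 / frame_time
```

The convergence rate is the same whether the frame rate is rising or falling, which satisfies the "bonus points" requirement in the question.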

This is what I have used in many games.
#define MAXSAMPLES 100
int tickindex = 0;
int ticksum = 0;
int ticklist[MAXSAMPLES];

/* need to zero out the ticklist array before starting */
/* average will ramp up until the buffer is full */
/* returns average ticks per frame over the MAXSAMPLES last frames */
double CalcAverageTick(int newtick)
{
    ticksum -= ticklist[tickindex];  /* subtract value falling off */
    ticksum += newtick;              /* add new value */
    ticklist[tickindex] = newtick;   /* save new value so it can be subtracted later */
    if (++tickindex == MAXSAMPLES)   /* inc buffer index */
        tickindex = 0;

    /* return average */
    return ((double)ticksum / MAXSAMPLES);
}

Well, certainly
frames / sec = 1 / (sec / frame)
But, as you point out, there's a lot of variation in the time it takes to render a single frame, and from a UI perspective updating the fps value every frame is not usable at all (unless the number is very stable).
What you want is probably a moving average or some sort of binning / resetting counter.
For example, you could maintain a queue data structure which holds the rendering times for each of the last 30, 60, 100, or what-have-you frames (you could even make the limit adjustable at run-time). To get a decent fps approximation you can compute the average fps from all the rendering times in the queue:
fps = # of rendering times in queue / total rendering time
When you finish rendering a new frame you enqueue the new rendering time and dequeue an old one. Alternately, you could dequeue only when the total of the rendering times exceeds some preset value (e.g. 1 sec). You can maintain the "last fps value" and a last-updated timestamp so you can trigger when to update the fps figure, if you so desire. Though with a moving average and consistent formatting, printing the "instantaneous average" fps on each frame would probably be fine.
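A minimal runnable sketch of this queue approach (Python; `FpsQueue` and the sample count are illustrative names, and `deque(maxlen=...)` handles the dequeue automatically):

```python
from collections import deque

class FpsQueue:
    """Moving-average FPS over the last `max_samples` frame times (seconds)."""
    def __init__(self, max_samples=60):
        self.times = deque(maxlen=max_samples)  # old values drop off automatically

    def add_frame_time(self, seconds):
        self.times.append(seconds)

    def fps(self):
        # average fps = number of rendering times in queue / total rendering time
        total = sum(self.times)
        return len(self.times) / total if total > 0 else 0.0

q = FpsQueue(max_samples=30)
for _ in range(100):            # 100 frames at a steady 50 FPS
    q.add_frame_time(0.02)
```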
Another method would be to have a resetting counter. Maintain a precise (millisecond) timestamp, a frame counter, and an fps value. When you finish rendering a frame, increment the counter. When the counter hits a pre-set limit (e.g. 100 frames) or when the time since the timestamp has passed some pre-set value (e.g. 1 sec), calculate the fps:
fps = # frames / (current time - start time)
Then reset the counter to 0 and set the timestamp to the current time.
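A runnable sketch of this resetting counter (Python; the `clock` parameter is only there so the sketch can be driven by a fake clock instead of real time):

```python
import time

class FpsCounter:
    """Recompute FPS every `interval` seconds; otherwise report the last value."""
    def __init__(self, interval=1.0, clock=time.perf_counter):
        self.interval = interval
        self.clock = clock
        self.start = clock()
        self.frames = 0
        self.fps = 0.0

    def frame_rendered(self):
        self.frames += 1
        now = self.clock()
        if now - self.start >= self.interval:
            self.fps = self.frames / (now - self.start)
            self.frames = 0          # reset the counter...
            self.start = now         # ...and the timestamp
```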

Increment a counter every time you render a frame, and clear the counter at the end of each time interval over which you want to measure the frame rate.
I.e. every 3 seconds, take counter/3 and then clear the counter.

There are at least two ways to do it:
The first is the one others have mentioned here before me.
I think it's the simplest and preferred way. You just keep track of
cn: counter of how many frames you've rendered
time_start: the time since you've started counting
time_now: the current time
Calculating the fps in this case is as simple as evaluating this formula:
FPS = cn / (time_now - time_start).
Then there is the uber cool way you might like to use some day:
Let's say you have 'i' frames to consider. I'll use this notation: f[0], f[1],..., f[i-1] to describe how long it took to render frame 0, frame 1, ..., frame (i-1) respectively.
Example where i = 3
|f[0] |f[1] |f[2] |
+----------+-------------+-------+------> time
Then the mathematical definition of fps after i frames would be
(1) fps[i] = i / (f[0] + ... + f[i-1])
And the same formula but only considering i-1 frames.
(2) fps[i-1] = (i-1) / (f[0] + ... + f[i-2])
Now the trick here is to modify the right side of formula (1) in such a way that it contains the right side of formula (2), and then substitute the latter's left side in its place.
Like so (you should see it more clearly if you write it out on paper):
fps[i] = i / (f[0] + ... + f[i-1])
= i / ((f[0] + ... + f[i-2]) + f[i-1])
= (i/(i-1)) / ((f[0] + ... + f[i-2])/(i-1) + f[i-1]/(i-1))
= (i/(i-1)) / (1/fps[i-1] + f[i-1]/(i-1))
= ...
= (i*fps[i-1]) / (f[i-1] * fps[i-1] + i - 1)
So according to this formula (my math-deriving skills are a bit rusty, though), to calculate the new fps you need to know the fps from the previous frame, the duration it took to render the last frame, and the number of frames you've rendered.
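To see that the recurrence really is just the plain average in disguise, here's a quick Python check (the frame times are made-up values):

```python
def fps_incremental(frame_times):
    """Incrementally compute fps[i] = (i * fps[i-1]) / (f[i-1] * fps[i-1] + i - 1)."""
    fps = 1.0 / frame_times[0]                 # fps[1] = 1 / f[0]
    for i in range(2, len(frame_times) + 1):
        f = frame_times[i - 1]                 # duration of the latest frame
        fps = (i * fps) / (f * fps + i - 1)
    return fps

times = [0.016, 0.020, 0.018, 0.033, 0.016]    # made-up frame durations in seconds
direct = len(times) / sum(times)               # definition (1): i / (f[0] + ... + f[i-1])
incremental = fps_incremental(times)
```

Both values agree to floating-point precision, confirming the derivation.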

This might be overkill for most people; that's why I hadn't posted it when I implemented it. But it's very robust and flexible.
It stores a Queue with the last frame times, so it can calculate an average FPS value much more accurately than just taking the last frame into consideration.
It also allows you to ignore one frame, if you are doing something that you know is going to artificially screw up that frame's time.
It also allows you to change the number of frames to store in the Queue as it runs, so you can test on the fly what the best value is for you.
// Number of past frames to use for FPS smooth calculation - because
// Unity's smoothedDeltaTime, well - it kinda sucks
private int frameTimesSize = 60;
// A Queue is the perfect data structure for the smoothed FPS task;
// new values in, old values out
private Queue<float> frameTimes = new Queue<float>();
// Not really needed, but used for faster updating than processing
// the entire queue every frame
private float __frameTimesSum = 0;
// Flag to ignore the next frame when performing a heavy one-time operation
// (like changing resolution)
private bool _fpsIgnoreNextFrame = false;

//=============================================================================
// Call this after doing a heavy operation that will screw up the FPS calculation
void FPSIgnoreNextFrame() {
    this._fpsIgnoreNextFrame = true;
}

//=============================================================================
// Smoothed FPS counter updating
void Update()
{
    if (this._fpsIgnoreNextFrame) {
        this._fpsIgnoreNextFrame = false;
        return;
    }
    // While looping here allows the frameTimesSize member to be changed dynamically
    while (this.frameTimes.Count >= this.frameTimesSize) {
        this.__frameTimesSum -= this.frameTimes.Dequeue();
    }
    while (this.frameTimes.Count < this.frameTimesSize) {
        this.__frameTimesSum += Time.deltaTime;
        this.frameTimes.Enqueue(Time.deltaTime);
    }
}

//=============================================================================
// Public function to get smoothed FPS values
public int GetSmoothedFPS() {
    return (int)(this.frameTimesSize / this.__frameTimesSum * Time.timeScale);
}

Good answers here. Just how you implement it depends on what you need it for. I prefer the running average one ("time = time * 0.9 + last_frame * 0.1") from the answer above.
However, I personally like to weight my average more heavily towards newer data, because in a game it is SPIKES that are the hardest to squash and thus of most interest to me. So I would use something more like a 0.7 / 0.3 split, which makes a spike show up much faster (though its effect will also drop off-screen faster... see below).
If your focus is on RENDERING time, then the 0.9 / 0.1 split works pretty nicely because it tends to be smoother. Though for gameplay/AI/physics, spikes are much more of a concern, as THAT is usually what makes your game look choppy (which is often worse than a low frame rate, assuming we're not dipping below 20 fps).
So, what I would do is also add something like this:
#define ONE_OVER_FPS (1.0f / 60.0f)
static float g_SpikeGuardBreakpoint = 3.0f * ONE_OVER_FPS;

if (time > g_SpikeGuardBreakpoint)
    DoInternalBreakpoint();
(fill in 3.0f with whatever magnitude you find to be an unacceptable spike)
This will let you find and thus solve FPS issues the very frame they happen.

A much better system than using a large array of old framerates is to just do something like this:
smoothed_fps = smoothed_fps * 0.99 + current_fps * 0.01
This method uses far less memory, requires far less code, and places more importance upon recent framerates than old framerates, while still smoothing the effects of sudden framerate changes.

You could keep a counter, increment it after each frame is rendered, and then reset the counter when you are on a new second (storing the previous value as the last second's # of frames rendered).

JavaScript:
// Set the end and start times
var start = (new Date).getTime(), end, FPS;
/* ...
* the loop/block your want to watch
* ...
*/
end = (new Date).getTime();
// the times are in milliseconds, so 1000 ms = 1 s
// frames per second = 1000 / milliseconds elapsed per frame
FPS = Math.round(1000 / (end - start));

Here's a complete example, using Python (but easily adapted to any language). It uses the smoothing equation in Martin's answer, so almost no memory overhead, and I chose values that worked for me (feel free to play around with the constants to adapt to your use case).
import time
SMOOTHING_FACTOR = 0.99
MAX_FPS = 10000
avg_fps = -1
last_tick = time.time()
while True:
    # <Do your rendering work here...>
    current_tick = time.time()
    # Ensure we don't get crazy large frame rates, by capping to MAX_FPS
    current_fps = 1.0 / max(current_tick - last_tick, 1.0 / MAX_FPS)
    last_tick = current_tick
    if avg_fps < 0:
        avg_fps = current_fps
    else:
        avg_fps = (avg_fps * SMOOTHING_FACTOR) + (current_fps * (1 - SMOOTHING_FACTOR))
    print(avg_fps)

Set a counter to zero. Each time you draw a frame, increment the counter. After each second, print the counter. Lather, rinse, repeat. If you want extra credit, keep a running counter and divide by the total number of seconds for a running average.

In (C++-like) pseudocode, these two are what I used in industrial image processing applications that had to process images from a set of externally triggered cameras. Variations in "frame rate" had a different source (slower or faster production on the belt) but the problem is the same. (I assume that you have a simple timer.peek() call that gives you something like the number of milliseconds (or nanoseconds?) since application start or the last call.)
Solution 1: fast but not updated every frame
do while (1)
{
    ProcessImage(frame)
    if (frame.framenumber % poll_interval == 0)
    {
        new_time = timer.peek()
        framerate = poll_interval / (new_time - last_time)
        last_time = new_time
    }
}
Solution 2: updated every frame, requires more memory and CPU
do while (1)
{
    ProcessImage(frame)
    new_time = timer.peek()
    delta = new_time - last_time
    last_time = new_time
    total_time += delta
    delta_history.push(delta)
    framerate = delta_history.length() / total_time
    while (delta_history.length() > avg_interval)
    {
        oldest_delta = delta_history.pop()
        total_time -= oldest_delta
    }
}

qx.Class.define('FpsCounter', {
    extend: qx.core.Object,
    properties: {
    },
    events: {
    },
    construct: function(){
        this.base(arguments);
        this.restart();
    },
    statics: {
    },
    members: {
        restart: function(){
            this.__frames = [];
        },
        addFrame: function(){
            this.__frames.push(new Date());
        },
        getFps: function(averageFrames){
            if(!averageFrames){
                averageFrames = 2;
            }
            var time = 0;
            var l = this.__frames.length;
            var i = averageFrames;
            while(i > 0){
                if(l - i - 1 >= 0){
                    time += this.__frames[l - i] - this.__frames[l - i - 1];
                }
                i--;
            }
            var fps = averageFrames / time * 1000;
            return fps;
        }
    }
});

How I do it!
boolean run = false;
int ticks = 0;
long tickstart;
int fps;

public void loop()
{
    if (this.ticks == 0)
    {
        this.tickstart = System.currentTimeMillis();
    }
    this.ticks++;
    long elapsed = System.currentTimeMillis() - this.tickstart;
    if (elapsed > 0)
    {
        // ticks per millisecond, scaled by 1000 to give ticks (frames) per second
        this.fps = (int)(this.ticks * 1000L / elapsed);
    }
}
In words, a tick clock tracks ticks. If it is the first time, it takes the current time and puts it in 'tickstart'. After the first tick, it makes the variable 'fps' equal to the number of ticks divided by the elapsed milliseconds since the first tick, scaled by 1000 to give frames per second (the elapsed-time guard avoids dividing by zero on the very first call).
Fps is an integer, hence the cast to "(int)".

Here's how I do it (in Java):
private static long ONE_SECOND = 1000000L * 1000L; // 1 second = 1000 ms = 1,000,000,000 ns

LinkedList<Long> frames = new LinkedList<>(); // List of frames within 1 second

public int calcFPS(){
    long time = System.nanoTime(); // Current time in nanoseconds
    frames.add(time); // Add this frame to the list
    while(true){
        long f = frames.getFirst(); // Look at the first element in frames
        if(time - f > ONE_SECOND){ // If it was more than 1 second ago
            frames.remove(); // Remove it from the list of frames
        } else break;
        /* If it was within 1 second we know that all other frames in the list
         * are also within 1 second */
    }
    return frames.size(); // Return the size of the list
}

In Typescript, I use this algorithm to calculate framerate and frametime averages:
let getTime = () => {
    return new Date().getTime();
}

let frames: any[] = [];
let previousTime = getTime();
let framerate: number = 0;
let frametime: number = 0;

let updateStats = (samples: number = 60) => {
    samples = Math.max(samples, 1) >> 0;
    if (frames.length === samples) {
        let currentTime: number = getTime() - previousTime;
        frametime = currentTime / samples;
        framerate = 1000 * samples / currentTime;
        previousTime = getTime();
        frames = [];
    }
    frames.push(1);
}
usage:
updateStats();
// Print
stats.innerHTML = Math.round(framerate) + ' FPS ' + frametime.toFixed(2) + ' ms';
Tip: If samples is 1, the result is real-time framerate and frametime.

This is based on KPexEA's answer and gives the Simple Moving Average. Tidied and converted to TypeScript for easy copy and paste:
Variable declaration:
fpsObject = {
    maxSamples: 100,
    tickIndex: 0,
    tickSum: 0,
    tickList: []
}
Function:
calculateFps(currentFps: number): number {
    this.fpsObject.tickSum -= this.fpsObject.tickList[this.fpsObject.tickIndex] || 0
    this.fpsObject.tickSum += currentFps
    this.fpsObject.tickList[this.fpsObject.tickIndex] = currentFps
    if (++this.fpsObject.tickIndex === this.fpsObject.maxSamples) this.fpsObject.tickIndex = 0
    const smoothedFps = this.fpsObject.tickSum / this.fpsObject.maxSamples
    return Math.floor(smoothedFps)
}
Usage (may vary in your app):
this.fps = this.calculateFps(this.ticker.FPS)

I adapted #KPexEA's answer to Go, moved the globals into struct fields, allowed the number of samples to be configurable, and used time.Duration instead of plain integers and floats.
type FrameTimeTracker struct {
    samples []time.Duration
    sum     time.Duration
    index   int
}

func NewFrameTimeTracker(n int) *FrameTimeTracker {
    return &FrameTimeTracker{
        samples: make([]time.Duration, n),
    }
}

func (t *FrameTimeTracker) AddFrameTime(frameTime time.Duration) (average time.Duration) {
    // algorithm adapted from https://stackoverflow.com/a/87732/814422
    t.sum -= t.samples[t.index]
    t.sum += frameTime
    t.samples[t.index] = frameTime
    t.index++
    if t.index == len(t.samples) {
        t.index = 0
    }
    return t.sum / time.Duration(len(t.samples))
}
The use of time.Duration, which has nanosecond precision, eliminates the need for floating-point arithmetic to compute the average frame time, but comes at the expense of needing twice as much memory for the same number of samples.
You'd use it like this:
// track the last 60 frame times
frameTimeTracker := NewFrameTimeTracker(60)

// main game loop
for frame := 0; ; frame++ {
    // ...
    if frame > 0 {
        // prevFrameTime is the duration of the last frame
        avgFrameTime := frameTimeTracker.AddFrameTime(prevFrameTime)
        fps := 1.0 / avgFrameTime.Seconds()
        // ... display fps ...
    }
    // ...
}
Since the context of this question is game programming, I'll add some more notes about performance and optimization. The above approach is idiomatic Go but always involves two heap allocations: one for the struct itself and one for the array backing the slice of samples. If used as indicated above, these are long-lived allocations so they won't really tax the garbage collector. Profile before optimizing, as always.
However, if performance is a major concern, some changes can be made to eliminate the allocations and indirections:
Change samples from a slice of []time.Duration to an array of [N]time.Duration where N is fixed at compile time. This removes the flexibility of changing the number of samples at runtime, but in most cases that flexibility is unnecessary.
Then, eliminate the NewFrameTimeTracker constructor function entirely and use a var frameTimeTracker FrameTimeTracker declaration (at the package level or local to main) instead. Unlike C, Go will pre-zero all relevant memory.

Unfortunately, most of the answers here don't provide either accurate enough or sufficiently "slow responsive" FPS measurements. Here's how I do it in Rust using a measurement queue:
use std::collections::VecDeque;
use std::time::{Duration, Instant};

pub struct FpsCounter {
    sample_period: Duration,
    max_samples: usize,
    creation_time: Instant,
    frame_count: usize,
    measurements: VecDeque<FrameCountMeasurement>,
}

#[derive(Copy, Clone)]
struct FrameCountMeasurement {
    time: Instant,
    frame_count: usize,
}

impl FpsCounter {
    pub fn new(sample_period: Duration, samples: usize) -> Self {
        assert!(samples > 1);
        Self {
            sample_period,
            max_samples: samples,
            creation_time: Instant::now(),
            frame_count: 0,
            measurements: VecDeque::new(),
        }
    }

    pub fn fps(&self) -> f32 {
        match (self.measurements.front(), self.measurements.back()) {
            (Some(start), Some(end)) => {
                let period = (end.time - start.time).as_secs_f32();
                if period > 0.0 {
                    (end.frame_count - start.frame_count) as f32 / period
                } else {
                    0.0
                }
            }
            _ => 0.0,
        }
    }

    pub fn update(&mut self) {
        self.frame_count += 1;
        let current_measurement = self.measure();
        let last_measurement = self
            .measurements
            .back()
            .copied()
            .unwrap_or(FrameCountMeasurement {
                time: self.creation_time,
                frame_count: 0,
            });
        if (current_measurement.time - last_measurement.time) >= self.sample_period {
            self.measurements.push_back(current_measurement);
            while self.measurements.len() > self.max_samples {
                self.measurements.pop_front();
            }
        }
    }

    fn measure(&self) -> FrameCountMeasurement {
        FrameCountMeasurement {
            time: Instant::now(),
            frame_count: self.frame_count,
        }
    }
}
How to use:
Create the counter:
let mut fps_counter = FpsCounter::new(Duration::from_millis(100), 5);
Call fps_counter.update() on every frame drawn.
Call fps_counter.fps() whenever you like to display current FPS.
Now, the key is in the parameters to the FpsCounter::new() method: sample_period is how responsive fps() is to changes in framerate, and samples controls how quickly fps() ramps up or down to the actual framerate. So if you choose 10 ms and 100 samples, fps() would react almost instantly to any change in framerate - basically, the FPS value on the screen would jitter like crazy, but since it's 100 samples, it would take 1 second to match the actual framerate.
So my choice of 100 ms and 5 samples means that the displayed FPS counter doesn't make your eyes bleed by changing crazy fast, and it will match your actual framerate half a second after it changes, which is sensible enough for a game.
Since sample_period * samples is averaging time span, you don't want it to be too short if you want a reasonably accurate FPS counter.

Store a start time and increment your frame counter once per loop. Every few seconds you can just print framecount/(now - starttime) and then reinitialize them.
edit: oops. double-ninja'ed

Related

Calculating sensing range from sensing sensitivity of the device in Castalia?

I am implementing a WSN algorithm in Castalia. I need to calculate sensing range of the sensing device. I know I will need to use the sensing sensitivity parameter but what will be the exact equation?
The answer will vary depending on the behaviour specified by the PhysicalProcess module used. Since you say in your comment that you may be using the CarsPhysicalProcess let's use that as an example.
A sensor reading request initiated by the application is first sent to the SensorManager via a SensorReadingMessage message. In SensorManager.cc you can see how this is processed in its handleMessage function:
...
case SENSOR_READING_MESSAGE: {
    SensorReadingMessage *rcvPacket = check_and_cast<SensorReadingMessage*>(msg);
    int sensorIndex = rcvPacket->getSensorIndex();
    simtime_t currentTime = simTime();
    simtime_t interval = currentTime - sensorlastSampleTime[sensorIndex];
    int getNewSample = (interval < minSamplingIntervals[sensorIndex]) ? 0 : 1;
    if (getNewSample) { /* the last request for a sample was more than minSamplingIntervals[sensorIndex] time ago */
        PhysicalProcessMessage *requestMsg =
            new PhysicalProcessMessage("sample request", PHYSICAL_PROCESS_SAMPLING);
        requestMsg->setSrcID(self); // insert information about the ID of the node
        requestMsg->setSensorIndex(sensorIndex); // insert information about the index of the sensor
        requestMsg->setXCoor(nodeMobilityModule->getLocation().x);
        requestMsg->setYCoor(nodeMobilityModule->getLocation().y);
        // send the request to the physical process (using the appropriate
        // gate index for the respective sensor device)
        send(requestMsg, "toNodeContainerModule", corrPhyProcess[sensorIndex]);
        // update the most recent sample times in sensorlastSampleTime[]
        sensorlastSampleTime[sensorIndex] = currentTime;
    } else { // send back the old sample value
        rcvPacket->setSensorType(sensorTypes[sensorIndex].c_str());
        rcvPacket->setSensedValue(sensorLastValue[sensorIndex]);
        send(rcvPacket, "toApplicationModule");
        return;
    }
    break;
}
....
As you can see, what it's doing is first working out how much time has elapsed since the last sensor reading request for this sensor. If that is less than the minimum sampling interval for the sensor (determined by the maxSampleRates NED parameter of the SensorManager), it just returns the last sensor reading taken. If it's greater, a new sensor reading is made.
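The gating logic just described can be sketched outside of Castalia like this (Python; `CachedSensor` and its methods are illustrative names, not Castalia's actual API):

```python
import time

class CachedSensor:
    """Sketch of the sampling-interval gating described above: take a fresh
    reading only if at least `min_interval` has elapsed since the last one;
    otherwise return the cached value."""
    def __init__(self, min_interval, read_fn, clock=time.monotonic):
        self.min_interval = min_interval
        self.read_fn = read_fn            # actually samples the physical process
        self.clock = clock
        self.last_time = float("-inf")
        self.last_value = None

    def request_reading(self):
        now = self.clock()
        if now - self.last_time < self.min_interval:
            return self.last_value        # too soon: send back the old sample
        self.last_value = self.read_fn()  # otherwise take a new sample
        self.last_time = now
        return self.last_value
```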
A new sensor reading is made by sending a PhysicalProcessMessage message to the PhysicalProcess module (via the toNodeContainerModule gate). In the message we pass the X and Y coordinates of the node.
Now, if we have specified CarsPhysicalProcess as the physical process to be used in our omnetpp.ini file, the CarsPhysicalProcess module will receive this message. You can see this in CarsPhysicalProcess.cc:
....
case PHYSICAL_PROCESS_SAMPLING: {
    PhysicalProcessMessage *phyMsg = check_and_cast<PhysicalProcessMessage *>(msg);
    // get the sensed value based on node location
    phyMsg->setValue(calculateScenarioReturnValue(
        phyMsg->getXCoor(), phyMsg->getYCoor(), phyMsg->getSendingTime()));
    // Send reply back to the node who made the request
    send(phyMsg, "toNode", phyMsg->getSrcID());
    return;
}
...
...
You can see that we calculate a sensor value based on the X and Y coordinates of the node, and the time at which the sensor reading was made. The response is sent back to the SensorManager via the toNode gate. So we need to look at the calculateScenarioReturnValue function to understand what's going on:
double CarsPhysicalProcess::calculateScenarioReturnValue(const double &x_coo,
        const double &y_coo, const simtime_t &stime)
{
    double retVal = 0.0f;
    int i;
    double linear_coeff, distance, x, y;
    for (i = 0; i < max_num_cars; i++) {
        if (sources_snapshots[i][1].time >= stime) {
            linear_coeff = (stime - sources_snapshots[i][0].time) /
                (sources_snapshots[i][1].time - sources_snapshots[i][0].time);
            x = sources_snapshots[i][0].x + linear_coeff *
                (sources_snapshots[i][1].x - sources_snapshots[i][0].x);
            y = sources_snapshots[i][0].y + linear_coeff *
                (sources_snapshots[i][1].y - sources_snapshots[i][0].y);
            distance = sqrt((x_coo - x) * (x_coo - x) +
                (y_coo - y) * (y_coo - y));
            retVal += pow(K_PARAM * distance + 1, -A_PARAM) * car_value;
        }
    }
    return retVal;
}
We start with a sensor return value of 0. Then we loop over every car that is on the road (if you look at the TIMER_SERVICE case statement in the handleMessage function, you will see that CarsPhysicalProcess puts cars on the road randomly according to the car_interarrival rate, up to a maximum of max_num_cars number of cars). For every car, we calculate how far the car has travelled down the road, and then calculate the distance between the car and the node. Then for each car we add to the return value based on the formula:
pow(K_PARAM * distance + 1, -A_PARAM) * car_value
Where distance is the distance we have calculated between the car and the node, K_PARAM = 0.1, A_PARAM = 1 (defined at the top of CarsPhysicalProcess.cc) and car_value is a number specified in the CarsPhysicalProcess.ned parameter file (default is 30).
This value is passed back to the SensorManager. The SensorManager then may change this value depending on the sensitivity, resolution, noise and bias of the sensor (defined as SensorManager parameters):
....
case PHYSICAL_PROCESS_SAMPLING:
{
    PhysicalProcessMessage *phyReply = check_and_cast<PhysicalProcessMessage*>(msg);
    int sensorIndex = phyReply->getSensorIndex();
    double theValue = phyReply->getValue();
    // add the sensor's Bias and the random noise
    theValue += sensorBias[sensorIndex];
    theValue += normal(0, sensorNoiseSigma[sensorIndex], 1);
    // process the limitations of the sensing device (sensitivity, resolution and saturation)
    if (theValue < sensorSensitivity[sensorIndex])
        theValue = sensorSensitivity[sensorIndex];
    if (theValue > sensorSaturation[sensorIndex])
        theValue = sensorSaturation[sensorIndex];
    theValue = sensorResolution[sensorIndex] * lrint(theValue / sensorResolution[sensorIndex]);
    ....
So you can see that if the value is below the sensitivity of the sensor, the floor of the sensitivity is returned.
So basically you can see that there is no specific 'sensing range' in Castalia - it all depends on how the specific PhysicalProcess handles the message. In the case of CarsPhysicalProcess, as long as there is a car on the road, it will always return a value, regardless of the distance - it just might be very small if the car is a long way from the node. If the value is very small, you may receive the lowest sensor sensitivity instead. You could increase or decrease the car_value parameter to get a stronger response from the sensor (so this is kind of like a sensor range).
EDIT---
The default sensitivity (which you can find in SensorManager.ned) is 0. Therefore for CarsPhysicalProcess, any car on the road at any distance should be detected and returned as a value greater than 0. In other words, there is an unlimited range. If the car is very, very far away it may return a number so small that it becomes truncated to zero (this depends on the limits of precision of a double value in the C++ implementation).
If you wanted to implement a sensing range, you would have to set a value for devicesSensitivity in SensorManager.ned. Then in your application, you would test to see if the returned value is greater than the sensitivity value - if it is, the car is 'in range', if it is (almost) equal to the sensitivity it is out of range. I say almost because (as we have seen earlier) the SensorManager adds noise to the value returned, so for example if you have a sensitivity value of 5, and no cars, you will get values which will hover slightly around 5 (e.g. 5.0001, 4.99)
With a sensitivity value set, to calculate the sensing range (assuming only 1 car on the road), this means simply solving the equation above for distance, using the minimum sensitivity value as the returned value. i.e. if we use a sensitivity value of 5:
5 = pow(K_PARAM * distance + 1, -A_PARAM) * car_value
Substitute values for the parameters, and use algebra to solve for distance.
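For example, with the default parameters quoted above (K_PARAM = 0.1, A_PARAM = 1, car_value = 30) and a hypothetical sensitivity of 5, the algebra works out like this (a quick Python check):

```python
K_PARAM = 0.1
A_PARAM = 1.0
car_value = 30.0
sensitivity = 5.0

# Solve sensitivity = (K*d + 1)^(-A) * car_value for d (single car, A = 1):
#   (K*d + 1) = (car_value / sensitivity)^(1/A)
distance = ((car_value / sensitivity) ** (1.0 / A_PARAM) - 1.0) / K_PARAM

# Sanity check: plugging the distance back in returns the sensitivity
value = (K_PARAM * distance + 1.0) ** (-A_PARAM) * car_value
```

So with those numbers the effective sensing range would be 50 distance units.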

IoT fault detection algorithm implementation

I am attempting to implement a sensor fault detection algorithm from a white paper I found here: http://www.hindawi.com/journals/mpe/2013/712028/ref/
My math skills are decent, but this article does not give great detail on how everything is set up.
My current implementation looks something like the following:
/*******************************************************************************
How this algorithm works:
1) There exist W historical windows which hold the distribution objects (mean, variance)
for that window. Each window is of size m (the sliding window size)
2) There exists a current window which changes every iteration by popping the
oldest value into the buffer window.
3) There exists a buffer window which takes the oldest values from the current window
each iteration. Once the buffer window reaches size m
it then becomes the newest historical window.
*******************************************************************************/
int m = 10; // Statistics sliding window size
float outlierDetectionThreshold; // The outlier detection threshold for sensor s, also called epsilon
List<float> U; // Holds the last 10 windows' means
List<float> V; // Holds the last 10 windows' variances
List<float> CurrentWindow; // Holds the last m values

procedure GFD()
    do
        get a value vi
        Detection(vi)
    while not end
    return

procedure Detection(vi)
    init outlierDetectionThreshold
    init U and V, loading last m distribution characteristics from DB
    init CurrentWindow loading the last m - 1 values
    Xi;  // What is this?
    Tau; // What is this?
    Insert vi into CurrentWindow // CurrentWindow now has the m latest values
    float CurrentWindowMean = Mean(CurrentWindow)
    float CurrentWindowVariance = Variance(CurrentWindow)
    if (IsStuck(CurrentWindowVariance) or IsSpike(vi))
        return
    if (IsOutlier(vi) and not IsRatStatChange(vi))
        return
    IsRatStatChange(vi)
    return

procedure IsStuck(variance)
    if (variance == 0)
        return true
    return false

procedure IsSpike(windowMean, windowVariance, historicalMeans, historicalVariances, xi, tau)
    if ((windowMean / Mean(historicalMeans)) < xi)
        if ((windowVariance / Mean(historicalVariances)) > tau)
            return true
    return false

procedure IsOutlier(historicalMeans, historicalVariances, outlierDetectionThreshold)
    // use historicalMeans and historicalVariances to calculate theta
    if (theta > outlierDetectionThreshold)
        return true
I am running into difficulty implementing the IsOutlier and IsRatStatChange functions.
In IsSpike, how are xi and tau calculated, or what do they represent?
For the IsOutlier function, how is theta calculated?
For the IsRatStatChange function, I have not looked into it as much yet, but does anyone have a solid enough grasp to write it?
Any other insights you might have would be most appreciated.
Thanks in advance.
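The window bookkeeping described in steps 1-3 of the comment block is at least unambiguous; here is a quick Python sketch of just that part (the class and names are mine, and the paper-specific tests and thresholds - xi, tau, theta - are deliberately left out since their definitions are the open question):

```python
from statistics import mean, pvariance

class WindowTracker:
    """Maintains the current window, a buffer window, and the (mean, variance)
    pairs of the last W historical windows, as in steps 1-3 above."""
    def __init__(self, m=10, w=10):
        self.m = m             # sliding window size
        self.w = w             # number of historical windows kept
        self.current = []      # the m most recent values
        self.buffer = []       # values aging out of the current window
        self.U = []            # historical means (last w windows)
        self.V = []            # historical variances (last w windows)

    def add(self, value):
        self.current.append(value)
        if len(self.current) > self.m:
            self.buffer.append(self.current.pop(0))  # oldest value moves to buffer
        if len(self.buffer) == self.m:               # buffer becomes a historical window
            self.U.append(mean(self.buffer))
            self.V.append(pvariance(self.buffer))
            self.buffer = []
            if len(self.U) > self.w:
                self.U.pop(0)
                self.V.pop(0)

    def is_stuck(self):
        # IsStuck from the pseudocode: zero variance over a full window
        return len(self.current) == self.m and pvariance(self.current) == 0
```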

How to find if the second hand of a clock lies in the larger area or smaller one [closed]

I have been given the time in the format: hh:mm:ss
I have to find if for a given time, the second hand lies in the larger or smaller area formed by the hour and minute hands?
I know that the hour hand moves at the rate of 0.5 degrees per minute,
the minute hand moves at the rate of 6 degrees per minute, and the second hand completes 360 degrees per minute.
But I am unable to find out in which area the second hand lies. So how can I do this?
Second hand within angle between hour and minute hands:
10:15:00
04:40:30
Second hand in reflex angle:
12:01:30
The problem intrigued me so I went ahead and wrote a test project in C#. As far as I can tell it works, though you will have to test it to make sure.
This is the code:
string strTime = "10:15:00";
DateTime dt = DateTime.ParseExact(strTime, "HH:mm:ss", System.Globalization.CultureInfo.InvariantCulture);

int nHourDegrees = (360 / 12) * dt.Hour;
int nMinuteDegrees = (360 / 60) * dt.Minute;
int nSecondDegrees = (360 / 60) * dt.Second;

if (nHourDegrees > nMinuteDegrees)
{
    int nArea1 = nHourDegrees - nMinuteDegrees;
    int nArea2 = 360 - nArea1;
    bool bArea1IsBigger = (nArea1 >= nArea2);
    if (nSecondDegrees <= nHourDegrees && nSecondDegrees >= nMinuteDegrees)
    {
        // Second hand lies in area1
        if (bArea1IsBigger)
        {
            Console.WriteLine("Second hand is in the larger area");
        }
        else
        {
            Console.WriteLine("Second hand is in the smaller area");
        }
    }
    else
    {
        if (bArea1IsBigger)
        {
            Console.WriteLine("Second hand is in the smaller area");
        }
        else
        {
            Console.WriteLine("Second hand is in the larger area");
        }
    }
}
else if (nMinuteDegrees > nHourDegrees)
{
    int nArea1 = nMinuteDegrees - nHourDegrees;
    int nArea2 = 360 - nArea1;
    bool bArea1IsBigger = (nArea1 >= nArea2);
    if (nSecondDegrees <= nMinuteDegrees && nSecondDegrees >= nHourDegrees)
    {
        // Second hand lies in area1
        if (bArea1IsBigger)
        {
            Console.WriteLine("Second hand is in the larger area");
        }
        else
        {
            Console.WriteLine("Second hand is in the smaller area");
        }
    }
    else
    {
        if (bArea1IsBigger)
        {
            Console.WriteLine("Second hand is in the smaller area");
        }
        else
        {
            Console.WriteLine("Second hand is in the larger area");
        }
    }
}
else
{
    if (nSecondDegrees == nHourDegrees)
    {
        Console.WriteLine("Second hand is on both of the other hands");
    }
    else
    {
        Console.WriteLine("Second hand is in the ONLY area");
    }
}
The idea is that we find the area between the hour and minute hands, then check whether the second hand falls inside it. We also compare this area with the other one, so we can easily deduce whether the second hand is in the smaller or larger of the two.
Note: Some improvements can be made to the code:
It can be significantly shorter - I haven't done this because it was mainly a test/proof of how this can be done
If the hours can overrun (i.e. a 24-hour clock rather than 12), an alteration will have to be made, i.e. subtract 12
If the components can reach 12/60/60 rather than wrapping back to 0, that will have to be handled manually
Constants can be added in to remove the need for magic numbers
Similar to the above but common calculations like 360 / 12 can be moved to constants
It can be broken out into separate methods to improve readability
Can be moved into a helper method i.e. bool IsInLargerArea(string timeString)
Need to add the case for when the areas are the same size, at the moment I just assume Area1 to be bigger if they are equal i.e. >= (greater than or equal to)
Add checks to make sure the parsed time string has exactly three parts
And possibly some more things that I have not thought of
I would go with a method that looks like this. You have to identify whether the smaller area crosses 0 degrees, and based on that you can determine the solution.
int minDegree = Math.min(hourHandDegree, minuteHandDegree);
int maxDegree = Math.max(hourHandDegree, minuteHandDegree);
bool over360 = Math.abs(hourHandDegree - minuteHandDegree) >= 180;
if (over360){
// the smaller area wraps past 0 degrees, so the second hand is inside it
// when it is past the later hand or before the earlier one
return (secondHandDegree > maxDegree) || (secondHandDegree < minDegree);
} else {
return (secondHandDegree > minDegree) && (secondHandDegree < maxDegree);
}
This solution is concise, easy to understand, and allows 12-hour or 24-hour input.
It is easier to visualize when you use radians.
The following is in R, but hopefully easy enough to read. Note %% is modulo operator.
which_region <- function(s){
# number of seconds elapsed in day (i.e., since midnight)
sec <- as.numeric(as.POSIXct(s, tz = "GMT", format = "%H:%M:%S")) %% (12*60*60)
# angle of each hand, clockwise from vertical, in radians
hour_ang <- 2*pi * (sec / (12*60*60)) # hour makes a circuit every 12*60*60 sec
min_ang <- 2*pi * ((sec / 60^2) %% 1) # min makes a circuit every 60*60 sec
sec_ang <- 2*pi * ((sec / 60) %% 1) # sec makes a circuit every 60 sec
hour_to_min_ang <- (2*pi + min_ang - hour_ang) %% (2*pi)
min_to_hr_ang <- (2*pi + hour_ang - min_ang) %% (2*pi)
if(hour_to_min_ang < min_to_hr_ang){
# measure clockwise from the hour hand so the wrap past 12 is handled
return(ifelse((sec_ang - hour_ang) %% (2*pi) < hour_to_min_ang,
"Smaller Area","Larger Area") )
} else if(min_to_hr_ang < hour_to_min_ang){
return(ifelse((sec_ang - min_ang) %% (2*pi) < min_to_hr_ang,
"Smaller Area","Larger Area") )
} else return("Equal")
}
which_region("06:00:00") # Equal
which_region("01:10:00") # Larger Area
which_region("01:20:15") # Smaller Area
which_region("05:10:20") # Smaller Area
which_region("12:00:00") # Equal
which_region("21:55:50") # Smaller Area
which_region("22:55:15") # Larger Area (10:55:15 PM; "%H:%M:%S" needs 24-hour input)

Can a program calculate the complexity of an algorithm?

Is there any way to compute the time complexity of an algorithm programmatically? For example, how could I calculate the complexity of a fibonacci(n) function?
The undecidability of the halting problem says that you can't even tell whether an algorithm terminates. I'm pretty sure it follows that you can't, in general, determine the complexity of an algorithm.
While it's impossible to do for all cases (unless you write your own code parser and analyse the loops, what affects their bounds, and so on), it is still possible to do it as a black-box test with an upper time bound: set a cutoff, and once a program's execution passes it, consider the program to be running forever.
With that approach your code would look similar to this (quick and dirty code; sorry, it's a little verbose, and the math might be off for larger powers, which I haven't checked).
It can be improved by using a fixed array of input values rather than randomly generating some, and by checking a wider range of values; you should be able to check any input against any other two inputs and determine all the patterns of method duration.
I'm sure there are much better (namely, more accurate) ways to estimate the O-class from a set of timings than the one shown here (which largely neglects how the run times relate between elements).
static void Main(string[] args)
{
var sw = new Stopwatch();
var inputTimes = new Dictionary<int, double>();
List<int> inputValues = new List<int>();
for (int i = 0; i < 25; i++)
{
inputValues.Add(i);
}
var ThreadTimeout = 10000;
for (int i = 0; i < inputValues.Count; i++)
{
int input = inputValues[i];
var WorkerThread = new Thread(t => CallMagicMethod(input)) { Name = "WorkerThread" };
sw.Reset();
Console.WriteLine("Input value '{0}' running...", input);
sw.Start();
WorkerThread.Start();
WorkerThread.Join(ThreadTimeout);
sw.Stop();
if (WorkerThread.IsAlive)
{
Console.WriteLine("Input value '{0}' exceeds timeout", input);
WorkerThread.Abort();
//break;
inputTimes.Add(input, double.MaxValue);
continue;
}
inputTimes.Add(input, sw.Elapsed.TotalMilliseconds);
Console.WriteLine("Input value '{0}' took {1}ms", input, sw.Elapsed.TotalMilliseconds);
}
List<int> indexes = inputTimes.Keys.OrderBy(k => k).ToList();
// calculate the difference between the values:
for (int i = 0; i < indexes.Count - 2; i++)
{
int index0 = indexes[i];
int index1 = indexes[i + 1];
if (!inputTimes.ContainsKey(index1))
{
continue;
}
int index2 = indexes[i + 2];
if (!inputTimes.ContainsKey(index2))
{
continue;
}
double[] runTimes = new double[] { inputTimes[index0], inputTimes[index1], inputTimes[index2] };
if (IsRoughlyEqual(runTimes[2], runTimes[1], runTimes[0]))
{
Console.WriteLine("Execution time for input = {0} to {1} is roughly O(1)", index0, index2);
}
else if (IsRoughlyEqual(runTimes[2] / Math.Log(index2, 2), runTimes[1] / Math.Log(index1, 2), runTimes[0] / Math.Log(index0, 2)))
{
Console.WriteLine("Execution time for input = {0} to {1} is roughly O(log N)", index0, index2);
}
else if (IsRoughlyEqual(runTimes[2] / index2, runTimes[1] / index1, runTimes[0] / index0))
{
Console.WriteLine("Execution time for input = {0} to {1} is roughly O(N)", index0, index2);
}
else if (IsRoughlyEqual(runTimes[2] / (Math.Log(index2, 2) * index2), runTimes[1] / (Math.Log(index1, 2) * index1), runTimes[0] / (Math.Log(index0, 2) * index0)))
{
Console.WriteLine("Execution time for input = {0} to {1} is roughly O(N log N)", index0, index2);
}
else
{
for (int pow = 2; pow <= 10; pow++)
{
if (IsRoughlyEqual(runTimes[2] / Math.Pow(index2, pow), runTimes[1] / Math.Pow(index1, pow), runTimes[0] / Math.Pow(index0, pow)))
{
Console.WriteLine("Execution time for input = {0} to {1} is roughly O(N^{2})", index0, index2, pow);
break;
}
else if (pow == 10)
{
Console.WriteLine("Execution time for input = {0} to {1} is greater than O(N^10)", index0, index2);
}
}
}
}
Console.WriteLine("Fin.");
}
private static double variance = 0.02;
public static bool IsRoughlyEqual(double value, double lower, double upper)
{
//returns if the lower, value and upper are each within a variance of the next value
//(IsBetween already applies the variance, so don't apply it a second time here)
return IsBetween(lower, value, value) &&
IsBetween(value, upper, upper);
}
public static bool IsBetween(double value, double lower, double upper)
{
//returns if the value is between the other 2 values +/- variance
lower = lower * (1 - variance);
upper = upper * (1 + variance);
return value > lower && value < upper;
}
public static void CallMagicMethod(int input)
{
try
{
MagicBox.MagicMethod(input);
}
catch (ThreadAbortException)
{
// expected when the worker thread is aborted after the timeout
}
catch (Exception ex)
{
Console.WriteLine("Unexpected Exception Occured: {0}", ex.Message);
}
}
And an example output:
Input value '59' running...
Input value '59' took 1711.8416ms
Input value '14' running...
Input value '14' took 90.9222ms
Input value '43' running...
Input value '43' took 902.7444ms
Input value '22' running...
Input value '22' took 231.5498ms
Input value '50' running...
Input value '50' took 1224.761ms
Input value '27' running...
Input value '27' took 351.3938ms
Input value '5' running...
Input value '5' took 9.8048ms
Input value '28' running...
Input value '28' took 377.8156ms
Input value '26' running...
Input value '26' took 325.4898ms
Input value '46' running...
Input value '46' took 1035.6526ms
Execution time for input = 5 to 22 is greater than O(N^10)
Execution time for input = 14 to 26 is roughly O(N^2)
Execution time for input = 22 to 27 is roughly O(N^2)
Execution time for input = 26 to 28 is roughly O(N^2)
Execution time for input = 27 to 43 is roughly O(N^2)
Execution time for input = 28 to 46 is roughly O(N^2)
Execution time for input = 43 to 50 is roughly O(N^2)
Execution time for input = 46 to 59 is roughly O(N^2)
Fin.
Which shows the magic method is likely O(N^2) for the given inputs +/- 2% variance
and another result here:
Input value '0' took 0.7498ms
Input value '1' took 0.3062ms
Input value '2' took 0.5038ms
Input value '3' took 4.9239ms
Input value '4' took 14.2928ms
Input value '5' took 29.9069ms
Input value '6' took 55.4424ms
Input value '7' took 91.6886ms
Input value '8' took 140.5015ms
Input value '9' took 204.5546ms
Input value '10' took 285.4843ms
Input value '11' took 385.7506ms
Input value '12' took 506.8602ms
Input value '13' took 650.7438ms
Input value '14' took 819.8519ms
Input value '15' took 1015.8124ms
Execution time for input = 0 to 2 is greater than O(N^10)
Execution time for input = 1 to 3 is greater than O(N^10)
Execution time for input = 2 to 4 is greater than O(N^10)
Execution time for input = 3 to 5 is greater than O(N^10)
Execution time for input = 4 to 6 is greater than O(N^10)
Execution time for input = 5 to 7 is greater than O(N^10)
Execution time for input = 6 to 8 is greater than O(N^10)
Execution time for input = 7 to 9 is greater than O(N^10)
Execution time for input = 8 to 10 is roughly O(N^3)
Execution time for input = 9 to 11 is roughly O(N^3)
Execution time for input = 10 to 12 is roughly O(N^3)
Execution time for input = 11 to 13 is roughly O(N^3)
Execution time for input = 12 to 14 is roughly O(N^3)
Execution time for input = 13 to 15 is roughly O(N^3)
Which shows the magic method is likely O(N^3) for the given inputs +/- 2% variance
So it is possible to programmatically determine the complexity of an algorithm; you need to make sure that you don't introduce additional work which makes it run longer than you expect (such as building all the input for the function inside the timed section).
Beyond that, remember that trying a large series of possible values and timing each run takes a significant amount of time; a more realistic test is to call your function with a large, realistic upper-bound input and determine whether its response time is sufficient for your usage.
You would likely only need this if you are performing black-box testing without source code (and can't use something like Reflector to view the source), or if you have to prove to a PHB that the coded algorithm is as fast as it can be (ignoring constant-factor improvements), as you claim it is.
Not in general. If the algorithm consists of nested simple for loops, e.g.
for (int i=a; i<b; ++i)
then you know this will contribute (b-a) steps. Now, if either b or a or both depends on n, then you can get a complexity from that. However, if you have something more exotic, like
for (int i=a; i<b; i=whackyFunction(i))
then you really need to understand what whackyFunction(i) does.
Similarly, break statements may screw this up, and while loops may be a lost cause, since you might not even be able to tell whether the loop terminates.
Count the arithmetic operations, memory accesses and memory space used inside fibonacci() or whatever it is, and measure its execution time. Do this with different inputs and watch the emerging trend, the asymptotic behavior.
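One way to make "watch the emerging trend" concrete: if T(n) ≈ c·n^k, then log T is linear in log n with slope k, so a least-squares fit over the log-timings estimates the exponent. A hedged Python sketch (quadratic_work is a made-up stand-in for the function under test):

```python
import math
import time

def quadratic_work(n):
    # Stand-in "black box": deliberately O(n^2).
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

def estimate_exponent(func, sizes):
    """Time func at several input sizes and fit log(time) = k*log(n) + c
    by least squares; the slope k estimates the polynomial degree."""
    xs, ys = [], []
    for n in sizes:
        start = time.perf_counter()
        func(n)
        elapsed = time.perf_counter() - start
        xs.append(math.log(n))
        ys.append(math.log(elapsed))
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

k = estimate_exponent(quadratic_work, [200, 400, 800, 1600])
print(round(k, 1))  # should come out near 2 for an O(n^2) function
```

Like the timing approach above, this only characterizes the inputs you actually measured; constant factors and interpreter overhead skew small sizes, so use inputs large enough that the dominant term wins.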
General measures like cyclomatic complexity are useful in giving you an idea of the more complex portions of your code, but it is a relatively simple mechanism.

Algorithm to order 'tag line' campaigns based on resulting sales

I want to be able to introduce new 'tag lines' into a database that are shown 'randomly' to users. (These tag lines are shown as an introduction as animated text.)
Based upon the number of sales that result from those taglines I'd like the good ones to trickle to the top, but still show the others less frequently.
I could come up with a basic algorithm quite easily, but I want something that's a little more statistically accurate.
I don't really know where to start. It's been a while since I've done anything more than basic statistics. My model would need to be sensitive to tolerances, but obviously it doesn't need to be worthy of a PhD.
Edit: I am currently tracking a 'conversion rate', i.e. hits per order. This value would probably be best calculated as a cumulative all-time conversion rate to be fed into the algorithm.
Looking at your problem, I would modify the requirements a bit -
1) The most popular one should be shown most often.
2) Taglines should "age", so one that got a lot of votes (purchases) in the past but none recently should be shown less often
3) Brand new taglines should be shown more often during their first days.
If you agree on those, then an algorithm could be something like:
START:
x = random(1, 3);
if x = 3 goto NEW else goto NORMAL
NEW:
TagVec = Taglines.filterYounger(5 days); // I'm taking a LOT of liberties with the pseudo code...
x = random(1, TagVec.Length);
return tagVec[x-1]; // 0 indexed vectors even in made up language,
NORMAL:
// Similar to EBGREEN above
sum = 0;
ForEach(TagLine in TagLines) {
sum += TagLine.noOfPurchases;
}
x = random(1, sum);
ForEach(TagLine in TagLines) {
x -= TagLine.noOfPurchases;
if (x <= 0) return TagLine; // find the TagLine our random number falls on
}
Now, as a setup, I would give every new tagline 10 purchases, to avoid a really big slant from one single purchase.
For the aging process I would count a purchase older than a week as 0.8 of a purchase per week of age. So 1 week old gives 0.8 points, 2 weeks gives 0.8*0.8 = 0.64, and so forth...
You would have to play around with the initial-purchases parameter (10 in my example), the aging interval (1 week here) and the aging factor (0.8 here) to find something that suits you.
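The scheme above can be sketched in a few lines of Python (the 10-purchase head start and the 0.8-per-week decay are the knobs just mentioned; all names are made up for illustration):

```python
import random

INITIAL_CREDIT = 10  # head start for every tagline, damps single-sale swings
DECAY = 0.8          # each week of age multiplies a purchase's weight by 0.8

def effective_score(purchase_ages_weeks):
    # A purchase that is w weeks old counts as DECAY**w of a purchase.
    return INITIAL_CREDIT + sum(DECAY ** w for w in purchase_ages_weeks)

def pick_tagline(taglines):
    """taglines: list of (name, list_of_purchase_ages_in_weeks) pairs."""
    names = [name for name, _ in taglines]
    weights = [effective_score(ages) for _, ages in taglines]
    return random.choices(names, weights=weights)[0]

taglines = [
    ("tagline A", [0, 0, 1]),  # two fresh sales, one a week old
    ("tagline B", [4, 5, 6]),  # the same number of sales, but older
    ("tagline C", []),         # brand new: only the initial credit
]
print(pick_tagline(taglines))
```

The NEW/NORMAL split in the pseudocode (showing recent taglines a third of the time) could sit on top of this by filtering `taglines` by age before the draw.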
I would suggest randomly choosing with a weighting factor based on previous sales. So let's say you had this:
tag1 = 1 sale
tag2 = 0 sales
tag3 = 1 sale
tag4 = 2 sales
tag5 = 3 sales
A simple weighting formula would be 1 + number of sales, so this would be the probability of selecting each tag:
tag1 = 2/12 = 16.7%
tag2 = 1/12 = 8.3%
tag3 = 2/12 = 16.7%
tag4 = 3/12 = 25%
tag5 = 4/12 = 33.3%
You could easily change the weighting formula to get just the distribution that you want.
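That weighted draw can be implemented with a running-sum trick; a Python sketch (the 120,000-draw check at the end is only there to show the observed frequencies approach the listed percentages):

```python
import random
from collections import Counter

sales = {"tag1": 1, "tag2": 0, "tag3": 1, "tag4": 2, "tag5": 3}
weights = {tag: 1 + n for tag, n in sales.items()}  # weight = 1 + sales
total = sum(weights.values())                       # 12

def pick_tag():
    # Draw a point in [0, total) and find which tag's slice it lands in.
    r = random.uniform(0, total)
    for tag, w in weights.items():
        r -= w
        if r <= 0:
            return tag
    return next(iter(weights))  # float edge-case guard

counts = Counter(pick_tag() for _ in range(120000))
for tag in sales:
    print(tag, round(counts[tag] / 120000, 2))
```

tag5 should come out near 4/12 ≈ 0.33 and tag2 near 1/12 ≈ 0.08, matching the table above; swapping in a different weighting formula only changes the `weights` line.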
You have to come up with a weighting formula based on sales.
I don't think there's any such thing as a "statistically accurate" formula here - it's all based on your preference.
No one can say "this is the correct weighting and the other weighting is wrong" because there isn't a final outcome you are attempting to model - this isn't like trying to weigh responses to a poll about an upcoming election (where you are trying to model results to represent something that will happen in the future).
Here's an example in JavaScript. Not that I'm suggesting running this client side...
Also, there is a lot of optimization that can be done.
Note: createMemberInNormalDistribution() is implemented in Converting a Uniform Distribution to a Normal Distribution
/*
* an example set of taglines
* hits are sales
* views are times its been shown
*/
var taglines = [
{"tag":"tagline 1","hits":1,"views":234},
{"tag":"tagline 2","hits":5,"views":566},
{"tag":"tagline 3","hits":3,"views":421},
{"tag":"tagline 4","hits":1,"views":120},
{"tag":"tagline 5","hits":7,"views":200}
];
/*set up our stat model for the tags*/
var TagModel = function(set){
var hits, views, sumOfDiff, sumOfSqDiff;
hits = views = sumOfDiff = sumOfSqDiff = 0;
/*find average*/
for (n in set){
hits += set[n].hits;
views += set[n].views;
}
this.avg = hits/views;
/*find standard deviation and variance*/
for (n in set){
var diff =((set[n].hits/set[n].views)-this.avg);
sumOfDiff += diff;
sumOfSqDiff += diff*diff;
}
this.variance = sumOfSqDiff/set.length;
this.std_dev = Math.sqrt(this.variance);
/*return tag to use; fChooser determines likelihood of tag*/
this.getTag = function(fChooser){
var m = this;
set.sort(function(a,b){
return fChooser((a.hits/a.views),(b.hits/b.views), m);
});
return set[0];
};
};
var config = {
"uniformDistribution":function(a,b,model){
return Math.random()*b-Math.random()*a;
},
"normalDistribution":function(a,b,model){
var a1 = createMemberInNormalDistribution(model.avg,model.std_dev)* a;
var b1 = createMemberInNormalDistribution(model.avg,model.std_dev)* b;
return b1-a1;
},
// say weight = 10^n; the higher n is, the more even the distribution will be.
"weight": .5,
"weightedDistribution":function(a,b,model){
var a1 = createMemberInNormalDistribution(model.avg,model.std_dev*config.weight)* a;
var b1 = createMemberInNormalDistribution(model.avg,model.std_dev*config.weight)* b;
return b1-a1;
}
}
var model = new TagModel(taglines);
//to use
model.getTag(config.uniformDistribution).tag;
//running 10000 times: ({'tagline 4':836, 'tagline 5':7608, 'tagline 1':100, 'tagline 2':924, 'tagline 3':532})
model.getTag(config.normalDistribution).tag;
//running 10000 times: ({'tagline 4':1775, 'tagline 5':3471, 'tagline 1':1273, 'tagline 2':1857, 'tagline 3':1624})
model.getTag(config.weightedDistribution).tag;
//running 10000 times: ({'tagline 4':1514, 'tagline 5':5045, 'tagline 1':577, 'tagline 2':1627, 'tagline 3':1237})
config.weight = 2;
model.getTag(config.weightedDistribution).tag;
//running 10000 times: {'tagline 4':1941, 'tagline 5':2715, 'tagline 1':1559, 'tagline 2':1957, 'tagline 3':1828})
