IoT fault detection algorithm implementation

I am attempting to implement a sensor fault detection algorithm from a white paper I found here: http://www.hindawi.com/journals/mpe/2013/712028/ref/
My math skills are decent, but this article does not give great detail on how everything is set up.
My current implementation looks something like the following:
/*******************************************************************************
How this algorithm works:
1) There exists W historical windows which hold the distribution objects (mean, variance)
for that window. Each window is of size m (the sliding window size)
2) There exists a current window which changes every iteration by popping the
oldest value into the buffer window.
3) There exists a buffer window which takes the oldest values from the current window
each iteration. Once the buffer window reaches size m
it then becomes the newest historical window.
*******************************************************************************/
int m = 10; //Statistics sliding window size
float outlierDetectionThreshold; // The outlier detection threshold for sensor s, also called epsilon
List<float> U; // Holds the last 10 windows mean
List<float> V; // Holds the last 10 windows variance
List<float> CurrentWindow; // Holds the last m values
procedure GFD()
do
get a value vi
Detection(vi)
while not end
return
procedure Detection(vi)
init outlierDetectionThreshold
init U and V, loading last m distribution characteristics from DB
init CurrentWindow loading the last m - 1 values
Xi; // What is this?
Tau; // What is this?
Insert vi into CurrentWindow // CurrentWindow now has the m latest values
float CurrentWindowMean = Mean(CurrentWindow)
float CurrentWindowVariance = Variance(CurrentWindow)
if (IsStuck(CurrentWindowVariance) or IsSpike(vi))
return
if (IsOutlier(vi) and not IsRatStatChange(vi))
return
IsRatStatChange(vi)
return
procedure IsStuck(variance)
if (variance == 0)
return true;
return false;
procedure IsSpike(windowMean, windowVariance, historicalMeans, historicalVariances, xi, tau)
if ( (windowMean / Mean(historicalMeans)) < xi)
if ( (windowVariance / Mean(historicalVariances)) > tau)
return true;
return false;
procedure IsOutlier(historicalMeans, historicalVariances, outlierDetectionThreshold)
// use historicalMeans and historicalVariances to calculate theta
if (theta > outlierDetectionThreshold)
return true;
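To make the window bookkeeping from the comment block concrete, here is a rough Python sketch of how I currently picture steps 1-3 (the names are mine, not from the paper):
from collections import deque
from statistics import mean, pvariance

m = 10                                   # sliding window size
W = 10                                   # number of historical windows kept
CurrentWindow = deque(maxlen=m)          # the m latest values
Buffer = []                              # values popped out of the current window
U = deque(maxlen=W)                      # historical window means
V = deque(maxlen=W)                      # historical window variances

def push_value(vi):
    if len(CurrentWindow) == m:
        Buffer.append(CurrentWindow[0])  # oldest value moves into the buffer window
    CurrentWindow.append(vi)             # deque drops the oldest value automatically
    if len(Buffer) == m:                 # buffer becomes the newest historical window
        U.append(mean(Buffer))
        V.append(pvariance(Buffer))
        Buffer.clear()
    return mean(CurrentWindow), pvariance(CurrentWindow)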
I am running into difficulty implementing the IsOutlier and IsRatStatChange functions.
In IsSpike, how are xi and tau calculated, or what do they represent?
For the IsOutlier function, how is theta calculated?
For the IsRatStatChange function, I have not looked into it as much yet, but does anyone have a solid enough grasp of it to write this?
Any other insights you can glean would be most appreciated.
Thanks in advance.

Related

Calculating sensing range from sensing sensitivity of the device in Castalia?

I am implementing a WSN algorithm in Castalia. I need to calculate sensing range of the sensing device. I know I will need to use the sensing sensitivity parameter but what will be the exact equation?
The answer will vary depending on the behaviour specified by the PhysicalProcess module used. Since you say in your comment that you may be using the CarsPhysicalProcess let's use that as an example.
A sensor reading request initiated by the application is first sent to the SensorManager via a SensorReadingMessage message. In SensorManager.cc you can see how this is processed in its handleMessage function:
...
case SENSOR_READING_MESSAGE: {
SensorReadingMessage *rcvPacket =check_and_cast<SensorReadingMessage*>(msg);
int sensorIndex = rcvPacket->getSensorIndex();
simtime_t currentTime = simTime();
simtime_t interval = currentTime - sensorlastSampleTime[sensorIndex];
int getNewSample = (interval < minSamplingIntervals[sensorIndex]) ? 0 : 1;
if (getNewSample) { //the last request for sample was more than minSamplingIntervals[sensorIndex] time ago
PhysicalProcessMessage *requestMsg =
new PhysicalProcessMessage("sample request", PHYSICAL_PROCESS_SAMPLING);
requestMsg->setSrcID(self); //insert information about the ID of the node
requestMsg->setSensorIndex(sensorIndex); //insert information about the index of the sensor
requestMsg->setXCoor(nodeMobilityModule->getLocation().x);
requestMsg->setYCoor(nodeMobilityModule->getLocation().y);
// send the request to the physical process (using the appropriate
// gate index for the respective sensor device )
send(requestMsg, "toNodeContainerModule", corrPhyProcess[sensorIndex]);
// update the most recent sample times in sensorlastSampleTime[]
sensorlastSampleTime[sensorIndex] = currentTime;
} else { // send back the old sample value
rcvPacket->setSensorType(sensorTypes[sensorIndex].c_str());
rcvPacket->setSensedValue(sensorLastValue[sensorIndex]);
send(rcvPacket, "toApplicationModule");
return;
}
break;
}
....
As you can see, what it's doing is first working out how much time has elapsed since the last sensor reading request for this sensor. If it's less time than specified by the minSamplingInterval possible for this sensor (this is determined by the maxSampleRates NED parameter of the SensorManager), it just returns the last sensor reading given. If it's greater, a new sensor reading is made.
A new sensor reading is made by sending a PhysicalProcessMessage message to the PhysicalProcess module (via the toNodeContainerModule gate). In the message we pass the X and Y coordinates of the node.
Now, if we have specified CarsPhysicalProcess as the physical process to be used in our omnetpp.ini file, the CarsPhysicalProcess module will receive this message. You can see this in CarsPhysicalProcess.cc:
....
case PHYSICAL_PROCESS_SAMPLING: {
PhysicalProcessMessage *phyMsg = check_and_cast < PhysicalProcessMessage * >(msg);
// get the sensed value based on node location
phyMsg->setValue(calculateScenarioReturnValue(
phyMsg->getXCoor(), phyMsg->getYCoor(), phyMsg->getSendingTime()));
// Send reply back to the node who made the request
send(phyMsg, "toNode", phyMsg->getSrcID());
return;
}
...
You can see that we calculate a sensor value based on the X and Y coordinates of the node, and the time at which the sensor reading was made. The response is sent back to the SensorManager via the toNode gate. So we need to look at the calculateScenarioReturnValue function to understand what's going on:
double CarsPhysicalProcess::calculateScenarioReturnValue(const double &x_coo,
const double &y_coo, const simtime_t &stime)
{
double retVal = 0.0f;
int i;
double linear_coeff, distance, x, y;
for (i = 0; i < max_num_cars; i++) {
if (sources_snapshots[i][1].time >= stime) {
linear_coeff = (stime - sources_snapshots[i][0].time) /
(sources_snapshots[i][1].time - sources_snapshots[i][0].time);
x = sources_snapshots[i][0].x + linear_coeff *
(sources_snapshots[i][1].x - sources_snapshots[i][0].x);
y = sources_snapshots[i][0].y + linear_coeff *
(sources_snapshots[i][1].y - sources_snapshots[i][0].y);
distance = sqrt((x_coo - x) * (x_coo - x) +
(y_coo - y) * (y_coo - y));
retVal += pow(K_PARAM * distance + 1, -A_PARAM) * car_value;
}
}
return retVal;
}
We start with a sensor return value of 0. Then we loop over every car that is on the road (if you look at the TIMER_SERVICE case statement in the handleMessage function, you will see that CarsPhysicalProcess puts cars on the road randomly according to the car_interarrival rate, up to a maximum of max_num_cars number of cars). For every car, we calculate how far the car has travelled down the road, and then calculate the distance between the car and the node. Then for each car we add to the return value based on the formula:
pow(K_PARAM * distance + 1, -A_PARAM) * car_value
Where distance is the distance we have calculated between the car and the node, K_PARAM = 0.1, A_PARAM = 1 (defined at the top of CarsPhysicalProcess.cc) and car_value is a number specified in the CarsPhysicalProcess.ned parameter file (default is 30).
This value is passed back to the SensorManager. The SensorManager then may change this value depending on the sensitivity, resolution, noise and bias of the sensor (defined as SensorManager parameters):
....
case PHYSICAL_PROCESS_SAMPLING:
{
PhysicalProcessMessage *phyReply = check_and_cast<PhysicalProcessMessage*>(msg);
int sensorIndex = phyReply->getSensorIndex();
double theValue = phyReply->getValue();
// add the sensor's Bias and the random noise
theValue += sensorBias[sensorIndex];
theValue += normal(0, sensorNoiseSigma[sensorIndex], 1);
// process the limitations of the sensing device (sensitivity, resolution and saturation)
if (theValue < sensorSensitivity[sensorIndex])
theValue = sensorSensitivity[sensorIndex];
if (theValue > sensorSaturation[sensorIndex])
theValue = sensorSaturation[sensorIndex];
theValue = sensorResolution[sensorIndex] * lrint(theValue / sensorResolution[sensorIndex]);
....
So you can see that if the value is below the sensitivity of the sensor, the floor of the sensitivity is returned.
So basically you can see that there is no specific 'sensing range' in Castalia - it all depends on how the specific PhysicalProcess handles the message. In the case of CarsPhysicalProcess, as long as there is a car on the road, it will always return a value, regardless of the distance - it just might be very small if the car is a long distance away from the node. If the value is very small, you may receive the lowest sensor sensitivity instead. You could increase or decrease the car_value parameter to get a stronger response from the sensor (so this is kind of like a sensor range)
EDIT---
The default sensitivity (which you can find in SensorManager.ned) is 0. Therefore for CarsPhysicalProcess, any car on the road at any distance should be detected and returned as a value greater than 0. In other words, there is an unlimited range. If the car is very, very far away it may return a number so small it becomes truncated to zero (this depends on the limits in precision of a double value in the implementation of c++)
If you wanted to implement a sensing range, you would have to set a value for devicesSensitivity in SensorManager.ned. Then in your application, you would test to see if the returned value is greater than the sensitivity value - if it is, the car is 'in range', if it is (almost) equal to the sensitivity it is out of range. I say almost because (as we have seen earlier) the SensorManager adds noise to the value returned, so for example if you have a sensitivity value of 5, and no cars, you will get values which will hover slightly around 5 (e.g. 5.0001, 4.99)
With a sensitivity value set, to calculate the sensing range (assuming only 1 car on the road), this means simply solving the equation above for distance, using the minimum sensitivity value as the returned value. i.e. if we use a sensitivity value of 5:
5 = pow(K_PARAM * distance + 1, -A_PARAM) * car_value
Substitute values for the parameters, and use algebra to solve for distance.
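For example, plugging in the defaults mentioned above (K_PARAM = 0.1, A_PARAM = 1, car_value = 30) and a sensitivity value of 5:
5 = 30 / (0.1 * distance + 1)
0.1 * distance + 1 = 6
distance = 50
so with those defaults a single car stops registering above the sensitivity floor once it is more than 50 distance units from the node. (This is only a worked example with the default parameters; if you have changed them in your .ned/.ini files the number will differ.)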

Add water between in a bar chart

Recently came across an interview question in glassdoor-like site and I can't find an optimized solution to solve this problem:
This is nothing like trapping water problem. Please read through the examples.
Given an input array where each element represents the height of a tower, an amount of water to be poured, and an index indicating the pouring position, print the graph after pouring the water. The width of every tower is 1.
Notes:
Use * to indicate the tower, w to represent 1 amount water.
The pouring position will never be at a peak position. No need to consider the divided-water case.
(Bonus point if you give a solution for that case; you may assume that if pouring N water at a peak position, N/2 water goes to the left and N/2 to the right.)
The definition of a peak: the height at the peak position is greater than both the left and right positions next to it.
Assume there are two extremely high walls sitting right next to the histogram,
so if the water amount is over the capacity of the histogram,
you should indicate the capacity number and keep going. See Example 2.
Assume the water goes left first; see Example 1.
Example 1:
int[] heights = {4,2,1,2,3,2,1,0,4,2,1}
It looks like:
*       *
*   *   **
** ***  **
******* ***
+++++++++++ <- there'll always be a base layer
42123210431
Assume given this heights array, water amount 3, position 2:
Print:
*       *
*ww *   **
**w***  **
******* ***
+++++++++++
Example 2:
int[] heights = {4,2,1,2,3,2,1,0,4,2,1}, water amount 32, position 2
Print:
capacity:21
wwwwwwwwwww
*wwwwwww*ww
*www*www**w
**w***ww**w
*******w***
+++++++++++
At first I thought it was like the trapping-water problem, but I was wrong. Does anyone have an algorithm to solve this problem?
An explanation or comments in the code would be welcomed.
Note:
The trapping-water problem asks for the capacity, but this question introduces two variables: the water amount and the pouring index. Besides, the water has a flow preference, so it is not like the trapping-water problem.
I found a Python solution to this question. However, I'm not familiar with Python, so I quote the code here. Hopefully, someone who knows Python can help.
Code by #z026
def pour_water(terrains, location, water):
    print 'location', location
    print 'len terrains', len(terrains)
    waters = [0] * len(terrains)
    while water > 0:
        left = location - 1
        while left >= 0:
            if terrains[left] + waters[left] > terrains[left + 1] + waters[left + 1]:
                break
            left -= 1
        if terrains[left + 1] + waters[left + 1] < terrains[location] + waters[location]:
            location_to_pour = left + 1
            print 'set by left', location_to_pour
        else:
            right = location + 1
            while right < len(terrains):
                if terrains[right] + waters[right] > terrains[right - 1] + waters[right - 1]:
                    print 'break, right: {}, right - 1:{}'.format(right, right - 1)
                    break
                right += 1
            if terrains[right - 1] + waters[right - 1] < terrains[location] + waters[location]:
                location_to_pour = right - 1
                print 'set by right', location_to_pour
            else:
                location_to_pour = location
                print 'set to location', location_to_pour
        waters[location_to_pour] += 1
        print location_to_pour
        water -= 1
    max_height = max(terrains)
    for height in xrange(max_height, -1, -1):
        for i in xrange(len(terrains)):
            if terrains[i] + waters[i] < height:
                print ' ',
            elif terrains[i] < height <= terrains[i] + waters[i]:
                print 'w',
            else:
                print '+',
        print ''
Since you have to generate and print out the array anyway, I'd probably opt for a recursive approach keeping to the O(rows*columns) complexity. Note each cell can be "visited" at most twice.
On a high level: first recurse down, then left, then right, then fill the current cell.
However, this runs into a little problem (assuming this is a problem):
*w    *                  *     *
**ww* *    instead of    **ww*w*
This can be fixed by updating the algorithm to go left and right first to fill cells below the current row, then to go both left and right again to fill the current row. Let's say state = v means we came from above, state = h1 means it's the first horizontal pass, state = h2 means it's the second horizontal pass.
You might be able to avoid this repeated visiting of cells by using a stack, but it's more complex.
Pseudo-code:
array[][] // populated with towers, as shown in the question
visited[][] // starts with all false
// call at the position you're inserting water (at the very top)
define fill(x, y, state):
if x or y out of bounds
or array[x][y] == '*'
or visited[x][y]
or waterCount == 0
return
visited[x][y] = true
// we came from above
if state == v
fill(x, y+1, v) // down
fill(x-1, y, h1) // left , 1st pass
fill(x+1, y, h1) // right, 1st pass
fill(x-1, y, h2) // left , 2nd pass
fill(x+1, y, h2) // right, 2nd pass
// this is a 1st horizontal pass
if state == h1
fill(x, y+1, v) // down
fill(x-1, y, h1) // left , 1st pass
fill(x+1, y, h1) // right, 1st pass
visited[x][y] = false // need to revisit cell later
return // skip filling the current cell
// this is a 2nd horizontal pass
if state == h2
fill(x-1, y, h2) // left , 2nd pass
fill(x+1, y, h2) // right, 2nd pass
// fill current cell
if waterCount > 0
array[x][y] = 'w'
waterCount--
You have an array height with the height of the terrain in each column, so I would create a copy of this array (let's call it w for water) to indicate how high the water is in each column. This way you also get rid of the problem of not knowing how many rows to initialize when transforming into a grid, and you can skip that step entirely.
The algorithm in Java code would look something like this:
public int[] getWaterHeight(int index, int drops, int[] heights) {
int[] w = Arrays.copyOf(heights, heights.length);
for (; drops > 0; drops--) {
int idx = index;
// go left first
while (idx > 0 && w[idx - 1] <= w[idx])
idx--;
// go right
for (;;) {
int t = idx + 1;
while (t < w.length && w[t] == w[idx])
t++;
if (t >= w.length || w[t] >= w[idx]) {
w[idx]++;
break;
} else { // we can go down to the right side here
idx = t;
}
}
}
return w;
}
Even though there are many loops, the complexity is only O(drops * columns). If you expect huge amount of drops then it could be wise to count the number of empty spaces in regard to the highest terrain point O(columns), then if the number of drops exceeds the free spaces, the calculation of the column heights becomes trivial O(1), however setting them all still takes O(columns).
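As a rough illustration of that shortcut, here is a Python sketch (the function and names are mine; it also assumes the overflow above the highest tower fills row by row from the left, in line with the "water goes left first" rule):
def water_heights_with_overflow(heights, drops):
    top = max(heights)
    free = sum(top - h for h in heights)        # empty cells below the highest tower, O(columns)
    if drops < free:
        return None                             # not enough water to flood everything: simulate drop by drop
    overflow = drops - free
    full_rows, remainder = divmod(overflow, len(heights))
    # every column reaches the height of the highest tower, plus the overflow rows on top;
    # the drops of the last partial row settle in the leftmost columns
    return [top + full_rows + (1 if i < remainder else 0) for i in range(len(heights))]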
You can iterate over the 2D grid from bottom to top, create a node for every horizontal run of connected cells, and then string these nodes together into a linked list that represents the order in which the cells are filled.
After row one, you have one horizontal run, with a volume of 1:
1(1)
In row two, you find three runs, one of which is connected to node 1:
1(1)->2(1) 3(1) 4(1)
In row three, you find three runs, one of which connects runs 2 and 3; run 3 is closest to the column where the water is added, so it comes first:
3(1)->1(1)->2(1)->5(3) 6(1) 4(1)->7(1)
In row four you find two runs, one of which connects runs 6 and 7; run 6 is closest to the column where the water is added, so it comes first:
3(1)->1(1)->2(1)->5(3)->8(4) 6(1)->4(1)->7(1)->9(3)
In row five you find a run which connects runs 8 and 9; they are on opposite sides of the column where the water is added, so the run on the left goes first:
3(1)->1(1)->2(1)->5(3)->8(4)->6(1)->4(1)->7(1)->9(3)->A(8)
Run A combines all the columns, so it becomes the last node and is given infinite volume; any excess drops will simply be stacked up:
3(1)->1(1)->2(1)->5(3)->8(4)->6(1)->4(1)->7(1)->9(3)->A(infinite)
then we fill the runs in the order in which they are listed, until we run out of drops.
That's my 20-minute solution. Each drop is telling the client where it will stay, so the difficult task is done (copy-paste it into your IDE). Only the printing has to be done now, but the drops are taking their positions. Take a look:
class Test2{
private static int[] heights = {3,4,4,4,3,2,1,0,4,2,1};
public static void main(String args[]){
int wAmount = 10;
int position = 2;
for(int i=0; i<wAmount; i++){
System.out.println(i+"#drop");
aDropLeft(position);
}
}
private static void aDropLeft(int position){
getHight(position);
int canFallTo = getFallPositionLeft(position);
if(canFallTo==-1){canFallTo = getFallPositionRight(position);}
if(canFallTo==-1){
stayThere(position);
return;
}
aDropLeft(canFallTo);
}
private static void stayThere(int position) {
System.out.print("Staying at: ");log(position);
heights[position]++;
}
//the position or -1 if it cant fall
private static int getFallPositionLeft(int position) {
int tempHeight = getHight(position);
int tempPosition = position;
//check left , if no, then check right
while(tempPosition>0){
if(tempHeight>getHight(tempPosition-1)){
return tempPosition-1;
}else tempPosition--;
}
return -1;
}
private static int getFallPositionRight(int position) {
int tempHeight = getHight(position);
int tempPosition = position;
while(tempPosition<heights.length-1){
if(tempHeight>getHight(tempPosition+1)){
return tempPosition+1;
}else if(tempHeight<getHight(tempPosition+1)){
return -1;
}else tempPosition++;
}
return -1;
}
private static int getHight(int position) {
return heights[position];
}
private static void log(int position) {
System.out.println("I am at position: " + position + " height: " + getHight(position));
}
}
Of course the code can be optimized, but that's my straightforward solution.
l = [0,1,0,2,1,0,1,3,2,1,2,1]

def findwater(l):
    w = 0
    for i in range(0, len(l)-1):
        if i == 0:
            pass
        else:
            num = min(max(l[:i]), max(l[i:])) - l[i]
            if num > 0:
                w += num
    return w
import pandas as pd

col_names = [1,2,3,4,5,6,7,8,9,10,11,12,13]  # for visualization
bars = [4,0,2,0,1,0,4,0,5,0,3,0,1]
pd.DataFrame(dict(zip(col_names, bars)), index=range(1)).plot(kind='bar')  # Plotting bars

def measure_water(l):
    water = 0
    for i in range(len(l)-1):  # iterate over bars (list)
        if i == 0:  # case to avoid max(l[:i]) when there is no item on the left
            pass
        else:
            vol_at_curr_bar = min(max(l[:i]), max(l[i:])) - l[i]  # min of the max heights on each side, minus current height
            if vol_at_curr_bar > 0:  # avoid adding any negative amount
                water += vol_at_curr_bar
    return water

measure_water(bars)

Memory and execution speed in Matlab

I am trying to create random lines and select some of them, which are really rare. My code is rather simple, but to get something I can use I need to create very large vectors (i.e. up to 100000000 x 1; the tracks variable in my code). Is there any way to create larger vectors and to reduce the time needed for all those calculations?
My code is
%Initial line values
tracks=input('Give me the number of muon tracks: ');
width=1e-4;
height=2e-4;
Ystart=15.*ones(tracks,1);
Xstart=-40+80.*rand(tracks,1);
%Xend=-40+80.*rand(tracks,1);
Xend=laprnd(tracks,1,Xstart,15);
X=[Xstart';Xend'];
Y=[Ystart';zeros(1,tracks)];
b=(Ystart.*Xend)./(Xend-Xstart);
hot=0;
cold=0;
for i=1:tracks
if ((Xend(i,1)<width/2 && Xend(i,1)>-width/2)||(b(i,1)<height && b(i,1)>0))
plot(X(:, i),Y(:, i),'r');%the chosen ones!
hold all
hot=hot+1;
else
%plot(X(:, i),Y(:, i),'b');%the rest of them
%hold all
cold=cold+1;
end
end
I am also using and calling a Laplace distribution generator made by Elvis Chen, which can be found here
function y = laprnd(m, n, mu, sigma)
%LAPRND generate i.i.d. laplacian random number drawn from laplacian distribution
% with mean mu and standard deviation sigma.
% mu : mean
% sigma : standard deviation
% [m, n] : the dimension of y.
% Default mu = 0, sigma = 1.
% For more information, refer to
% http://en.wikipedia.org./wiki/Laplace_distribution
% Author : Elvis Chen (bee33#sjtu.edu.cn)
% Date : 01/19/07
%Check inputs
if nargin < 2
error('At least two inputs are required');
end
if nargin == 2
mu = 0; sigma = 1;
end
if nargin == 3
sigma = 1;
end
% Generate Laplacian noise
u = rand(m, n)-0.5;
b = sigma / sqrt(2);
y = mu - b * sign(u).* log(1- 2* abs(u));
As you indicate, your problem is two-fold. On the one hand, you have memory issues because you need to do so many trials. On the other hand, you have performance issues, because you have to process all those trials.
Solutions to each issue often have a negative impact on the other issue. IMHO, the best approach would be to find a compromise.
More trials are only possible if you get rid of those gargantuan arrays that are required for vectorization, and use a different strategy to do the loop. I will give priority to the possibility of using more trials, possibly at the cost of optimal performance.
When I execute your code as-is in the Matlab profiler, it immediately shows that the initial memory allocation for all your variables takes a lot of time. It also shows that the plot and hold all commands are the most time-consuming lines of them all. Some more trial-and-error shows that there is a disappointingly low maximum value for the trials you can do before OUT OF MEMORY errors start appearing.
The loop can be accelerated tremendously if you know a few things about its limitations in Matlab. In older versions of Matlab, it used to be true that loops should be avoided completely in favor of 'vectorized' code. In recent versions (I believe R2008a and up), the Mathworks introduced a piece of technology called the JIT accelerator (Just-in-Time compiler) which translates M-code into machine language on the fly during execution. Simply put, the JIT accelerator allows your code to bypass Matlab's interpreter and talk much more directly with the underlying hardware, which can save a lot of time.
The advice you'll hear a lot that loops should be avoided in Matlab, is no longer generally true. While vectorization still has its value, any procedure of sizable complexity that is implemented using only vectorized code is often illegible, hard to understand, hard to change and hard to upkeep. An implementation of the same procedure that uses loops, often has none of these drawbacks, and moreover, it will quite often be faster and require less memory.
Unfortunately, the JIT accelerator has a few nasty (and IMHO, unnecessary) limitations that you'll have to learn about.
One such thing is plot; it's generally a better idea to let a loop do nothing other than collect and manipulate data, and delay any plotting commands etc. until after the loop.
Another such thing is hold; the hold function is not a Matlab built-in function, meaning, it is implemented in M-language. Matlab's JIT accelerator is not able to accelerate non-builtin functions when used in a loop, meaning, your entire loop will run at Matlab's interpretation speed, rather than machine-language speed! Therefore, also delay this command until after the loop :)
Now, in case you're wondering, this last step can make a HUGE difference -- I know of one case where copy-pasting a function body into the upper-level loop caused a 1200x performance improvement; days of execution time were reduced to minutes!
There is actually another minor issue in your loop (a really small and, I will immediately agree, rather inconvenient one): the loop variable should not be named i. The name i is the name of the imaginary unit in Matlab, and resolving that name will also unnecessarily consume time on each iteration. It's small, but non-negligible.
Now, considering all this, I've come to the following implementation:
function [hot, cold, h] = MuonTracks(tracks)
% NOTE: no variables larger than 1x1 are initialized
width = 1e-4;
height = 2e-4;
% constant used for Laplacian noise distribution
bL = 15 / sqrt(2);
% Loop through all tracks
X = [];
hot = 0;
ii = 0;
while ii <= tracks
ii = ii + 1;
% Note that I've inlined (== copy-pasted) the original laprnd()
% function call. This was necessary to work around limitations
% in loops in Matlab, and prevent the necessity of those HUGE
% variables.
%
% Of course, you can still easily generalize all of this:
% the new data
u = rand-0.5;
Ystart = 15;
Xstart = 800*rand-400;
Xend = Xstart - bL*sign(u)*log(1-2*abs(u));
b = (Ystart*Xend)/(Xend-Xstart);
% the test
if ((b < height && b > 0)) ||...
(Xend < width/2 && Xend > -width/2)
hot = hot+1;
% growing an array is perfectly fine when the chances of it
% happening are so slim
X = [X [Xstart; Xend]]; %#ok
end
end
% This is trivial to do here, and prevents an 'else' in the loop
cold = tracks - hot;
% Now plot the chosen ones
h = figure;
hold all
Y = repmat([15;0], 1, size(X,2));
plot(X, Y, 'r');
end
With this implementation, I can do this:
>> tic, MuonTracks(1e8); toc
Elapsed time is 24.738725 seconds.
with a completely negligible memory footprint.
The profiler now also shows a nice and even distribution of effort along the code; no lines that really stand out because of their memory use or performance.
It's possibly not the fastest possible implementation (if anyone sees obvious improvements, please, feel free to edit them in). But, if you're willing to wait, you'll be able to do MuonTracks(1e23) (or higher :)
I've also done an implementation in C, which can be compiled into a Matlab MEX file:
/* DoMuonCounting.c */
#include <math.h>
#include <matrix.h>
#include <mex.h>
#include <time.h>
#include <stdlib.h>
void CountMuons(
unsigned long long tracks,
unsigned long long *hot, unsigned long long *cold, double *Xout);
/* simple little helper functions */
double sign(double x) { return (x>0)-(x<0); }
double rand_double() { return (double)rand()/(double)RAND_MAX; }
/* the gateway function */
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
int
dims[] = {1,1};
const mxArray
/* Output arguments */
*hot_out = plhs[0] = mxCreateNumericArray(2,dims, mxUINT64_CLASS,0),
*cold_out = plhs[1] = mxCreateNumericArray(2,dims, mxUINT64_CLASS,0),
*X_out = plhs[2] = mxCreateDoubleMatrix(2,10000, mxREAL);
const unsigned long long
tracks = (const unsigned long long)mxGetPr(prhs[0])[0];
unsigned long long
*hot = (unsigned long long*)mxGetPr(hot_out),
*cold = (unsigned long long*)mxGetPr(cold_out);
double
*Xout = mxGetPr(X_out);
/* call the actual function, and return */
CountMuons(tracks, hot,cold, Xout);
}
// The actual muon counting
void CountMuons(
unsigned long long tracks,
unsigned long long *hot, unsigned long long *cold, double *Xout)
{
const double
width = 1.0e-4,
height = 2.0e-4,
bL = 15.0/sqrt(2.0),
Ystart = 15.0;
double
Xstart,
Xend,
u,
b;
unsigned long long
i = 0ul;
*hot = 0ul;
*cold = tracks;
/* seed the RNG */
srand((unsigned)time(NULL));
/* aaaand start! */
while (i++ < tracks)
{
u = rand_double() - 0.5;
Xstart = 800.0*rand_double() - 400.0;
Xend = Xstart - bL*sign(u)*log(1.0-2.0*fabs(u));
b = (Ystart*Xend)/(Xend-Xstart);
if ((b < height && b > 0.0) || (Xend < width/2.0 && Xend > -width/2.0))
{
Xout[0 + *hot*2] = Xstart;
Xout[1 + *hot*2] = Xend;
++(*hot);
--(*cold);
}
}
}
compile in Matlab with
mex DoMuonCounting.c
(after having run mex -setup :) and then use it in conjunction with a small M-wrapper like this:
function [hot,cold, h] = MuonTrack2(tracks)
% call the MEX function
[hot,cold, Xtmp] = DoMuonCounting(tracks);
% process outputs, and generate plots
hot = uint32(hot); % circumvents limitations in 32-bit matlab
X = Xtmp(:,1:hot);
clear Xtmp
h = NaN;
if ~isempty(X)
h = figure;
hold all
Y = repmat([15;0], 1, hot);
plot(X, Y, 'r');
end
end
which allows me to do
>> tic, MuonTrack2(1e8); toc
Elapsed time is 14.496355 seconds.
Note that the memory footprint of the MEX version is slightly larger, but I think that's nothing to worry about.
The only flaw I see is the fixed maximum number of Muon counts (hard-coded as 10000 as the initial array size of Xout; needed because there are no dynamically growing arrays in standard C)...if you're worried this limit could be broken, simply increase it, change it to be equal to a fraction of tracks, or do some smarter (but more painful) dynamic array-growing tricks.
In Matlab, it is sometimes faster to vectorize rather than use a for loop. For example, this expression:
(Xend(i,1) < width/2 && Xend(i,1) > -width/2) || (b(i,1) < height && b(i,1) > 0)
which is defined for each value of i, can be rewritten in a vectorised manner like this:
isChosen = (Xend(:,1) < width/2 & Xend(:,1) > -width/2) | (b(:,1) < height & b(:,1)>0)
Expressions like Xend(:,1) will give you a column vector, so Xend(:,1) < width/2 will give you a column vector of boolean values. Note then that I have used & rather than && - this is because & performs an element-wise logical AND, unlike && which only works on scalar values. In this way you can build the entire expression, such that the variable isChosen holds a column vector of boolean values, one for each row of your Xend/b vectors.
Getting counts is now as simple as this:
hot = sum(isChosen);
since true is represented by 1. And:
cold = sum(~isChosen);
Finally, you can get the data points by using the boolean vector to select rows:
plot(X(:, isChosen),Y(:, isChosen),'r'); % Plot chosen values
hold all;
plot(X(:, ~isChosen),Y(:, ~isChosen),'b'); % Plot unchosen values
EDIT: The code should look like this:
isChosen = (Xend(:,1) < width/2 & Xend(:,1) > -width/2) | (b(:,1) < height & b(:,1)>0);
hot = sum(isChosen);
cold = sum(~isChosen);
plot(X(:, isChosen),Y(:, isChosen),'r'); % Plot chosen values

Calculating frames per second in a game

What's a good algorithm for calculating frames per second in a game? I want to show it as a number in the corner of the screen. If I just look at how long it took to render the last frame the number changes too fast.
Bonus points if your answer updates each frame and doesn't converge differently when the frame rate is increasing vs decreasing.
You need a smoothed average, the easiest way is to take the current answer (the time to draw the last frame) and combine it with the previous answer.
// eg.
float smoothing = 0.9; // larger=more smoothing
measurement = (measurement * smoothing) + (current * (1.0-smoothing))
By adjusting the 0.9 / 0.1 ratio you can change the 'time constant' - that is how quickly the number responds to changes. A larger fraction in favour of the old answer gives a slower smoother change, a large fraction in favour of the new answer gives a quicker changing value. Obviously the two factors must add to one!
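As a rough rule of thumb, an exponential average like this responds over about 1 / (1 - smoothing) frames, so smoothing = 0.9 averages over roughly the last 10 frames and smoothing = 0.99 over roughly the last 100.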
This is what I have used in many games.
#define MAXSAMPLES 100
int tickindex=0;
int ticksum=0;
int ticklist[MAXSAMPLES];
/* need to zero out the ticklist array before starting */
/* average will ramp up until the buffer is full */
/* returns average ticks per frame over the MAXSAMPLES last frames */
double CalcAverageTick(int newtick)
{
ticksum-=ticklist[tickindex]; /* subtract value falling off */
ticksum+=newtick; /* add new value */
ticklist[tickindex]=newtick; /* save new value so it can be subtracted later */
if(++tickindex==MAXSAMPLES) /* inc buffer index */
tickindex=0;
/* return average */
return((double)ticksum/MAXSAMPLES);
}
Well, certainly
frames / sec = 1 / (sec / frame)
But, as you point out, there's a lot of variation in the time it takes to render a single frame, and from a UI perspective updating the fps value at the frame rate is not usable at all (unless the number is very stable).
What you want is probably a moving average or some sort of binning / resetting counter.
For example, you could maintain a queue data structure which held the rendering times for each of the last 30, 60, 100, or what-have-you frames (you could even design it so the limit was adjustable at run-time). To determine a decent fps approximation you can determine the average fps from all the rendering times in the queue:
fps = # of rendering times in queue / total rendering time
When you finish rendering a new frame you enqueue a new rendering time and dequeue an old rendering time. Alternately, you could dequeue only when the total of the rendering times exceeded some preset value (e.g. 1 sec). You can maintain the "last fps value" and a last updated timestamp so you can trigger when to update the fps figure, if you so desire. Though with a moving average if you have consistent formatting, printing the "instantaneous average" fps on each frame would probably be ok.
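As a rough sketch of that queue idea in Python (the class and names are mine, not from any particular engine):
import collections
import time

class FpsMeter:
    def __init__(self, max_samples=60):
        self.frame_times = collections.deque(maxlen=max_samples)  # rendering times of the last N frames
        self.last = time.perf_counter()

    def tick(self):
        now = time.perf_counter()
        self.frame_times.append(now - self.last)                  # enqueue new time; deque drops the oldest
        self.last = now
        total = sum(self.frame_times)
        return len(self.frame_times) / max(total, 1e-9)           # frames / total rendering time
Call tick() once per rendered frame and display whatever it returns.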
Another method would be to have a resetting counter. Maintain a precise (millisecond) timestamp, a frame counter, and an fps value. When you finish rendering a frame, increment the counter. When the counter hits a pre-set limit (e.g. 100 frames) or when the time since the timestamp has passed some pre-set value (e.g. 1 sec), calculate the fps:
fps = # frames / (current time - start time)
Then reset the counter to 0 and set the timestamp to the current time.
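A minimal Python sketch of that resetting counter (names are mine):
import time

frames = 0
window_start = time.perf_counter()
fps = 0.0

def frame_rendered():
    global frames, window_start, fps
    frames += 1
    now = time.perf_counter()
    if frames >= 100 or now - window_start >= 1.0:   # pre-set frame or time limit reached
        fps = frames / (now - window_start)
        frames = 0
        window_start = now
    return fps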
Increment a counter every time you render a screen and clear that counter for some time interval over which you want to measure the frame-rate.
I.e. every 3 seconds, take counter/3 and then clear the counter.
There are at least two ways to do it:
The first is the one others have mentioned here before me.
I think it's the simplest and preferred way. You just have to keep track of
cn: counter of how many frames you've rendered
time_start: the time since you've started counting
time_now: the current time
Calculating the fps in this case is as simple as evaluating this formula:
FPS = cn / (time_now - time_start).
Then there is the uber cool way you might like to use some day:
Let's say you have 'i' frames to consider. I'll use this notation: f[0], f[1],..., f[i-1] to describe how long it took to render frame 0, frame 1, ..., frame (i-1) respectively.
Example where i = 3
|   f[0]   |    f[1]     | f[2]  |
+----------+-------------+-------+------> time
Then, mathematical definition of fps after i frames would be
(1) fps[i] = i / (f[0] + ... + f[i-1])
And the same formula but only considering i-1 frames.
(2) fps[i-1] = (i-1) / (f[0] + ... + f[i-2])
Now the trick here is to modify the right side of formula (1) in such a way that it contains the right side of formula (2), and then substitute the left side of (2) for it.
Like so (you should see it more clearly if you write it out on paper):
fps[i] = i / (f[0] + ... + f[i-1])
= i / ((f[0] + ... + f[i-2]) + f[i-1])
= (i/(i-1)) / ((f[0] + ... + f[i-2])/(i-1) + f[i-1]/(i-1))
= (i/(i-1)) / (1/fps[i-1] + f[i-1]/(i-1))
= ...
= (i*fps[i-1]) / (f[i-1] * fps[i-1] + i - 1)
So according to this formula (my math-deriving skills are a bit rusty, though), to calculate the new fps you need to know the fps from the previous frame, the duration it took to render the last frame and the number of frames you've rendered.
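A tiny Python sketch of that recurrence (names are mine):
def update_fps(prev_fps, last_frame_time, i):
    # fps[i] = (i * fps[i-1]) / (f[i-1] * fps[i-1] + i - 1), valid for i >= 2
    return (i * prev_fps) / (last_frame_time * prev_fps + i - 1)

fps = 1.0 / 0.016                  # fps[1] = 1 / f[0] for a 16 ms first frame
fps = update_fps(fps, 0.016, 2)    # stays at 62.5 while frames keep taking 16 ms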
This might be overkill for most people, that's why I hadn't posted it when I implemented it. But it's very robust and flexible.
It stores a Queue with the last frame times, so it can accurately calculate an average FPS value much better than just taking the last frame into consideration.
It also allows you to ignore one frame, if you are doing something that you know is going to artificially screw up that frame's time.
It also allows you to change the number of frames to store in the Queue as it runs, so you can test it out on the fly what is the best value for you.
// Number of past frames to use for FPS smooth calculation - because
// Unity's smoothedDeltaTime, well - it kinda sucks
private int frameTimesSize = 60;
// A Queue is the perfect data structure for the smoothed FPS task;
// new values in, old values out
private Queue<float> frameTimes = new Queue<float>();
// Not really needed, but used for faster updating than processing
// the entire queue every frame
private float __frameTimesSum = 0;
// Flag to ignore the next frame when performing a heavy one-time operation
// (like changing resolution)
private bool _fpsIgnoreNextFrame = false;
//=============================================================================
// Call this after doing a heavy operation that will screw up with FPS calculation
void FPSIgnoreNextFrame() {
this._fpsIgnoreNextFrame = true;
}
//=============================================================================
// Smoothed FPS counter updating
void Update()
{
if (this._fpsIgnoreNextFrame) {
this._fpsIgnoreNextFrame = false;
return;
}
// Using a while loop here allows the frameTimesSize member to be changed dynamically
while (this.frameTimes.Count >= this.frameTimesSize) {
this.__frameTimesSum -= this.frameTimes.Dequeue();
}
while (this.frameTimes.Count < this.frameTimesSize) {
this.__frameTimesSum += Time.deltaTime;
this.frameTimes.Enqueue(Time.deltaTime);
}
}
//=============================================================================
// Public function to get smoothed FPS values
public int GetSmoothedFPS() {
return (int)(this.frameTimesSize / this.__frameTimesSum * Time.timeScale);
}
Good answers here. Just how you implement it depends on what you need it for. I prefer the running average one myself, "time = time * 0.9 + last_frame * 0.1", as given above.
However, I personally like to weight my average more heavily towards newer data, because in a game it is SPIKES that are the hardest to squash and thus of most interest to me. So I would use something more like a .7 / .3 split, which will make a spike show up much faster (though its effect will drop off-screen faster as well; see below).
If your focus is on RENDERING time, then the .9 / .1 split works pretty nicely because it tends to be smoother. Though for gameplay/AI/physics, spikes are much more of a concern, as that is usually what makes your game look choppy (which is often worse than a low frame rate, assuming we're not dipping below 20 fps).
So, what I would do is also add something like this:
#define ONE_OVER_FPS (1.0f/60.0f)
static float g_SpikeGuardBreakpoint = 3.0f * ONE_OVER_FPS;
if(time > g_SpikeGuardBreakpoint)
DoInternalBreakpoint()
(fill in 3.0f with whatever magnitude you find to be an unacceptable spike)
This will let you find, and thus solve, FPS issues at the end of the very frame in which they happen.
A much better system than using a large array of old framerates is to just do something like this:
new_fps = old_fps * 0.99 + new_fps * 0.01
This method uses far less memory, requires far less code, and places more importance upon recent framerates than old framerates while still smoothing the effects of sudden framerate changes.
You could keep a counter, increment it after each frame is rendered, then reset the counter when you are on a new second (storing the previous value as the last second's # of frames rendered)
JavaScript:
// Set the end and start times
var start = (new Date).getTime(), end, FPS;
/* ...
* the loop/block your want to watch
* ...
*/
end = (new Date).getTime();
// since the times are by millisecond, use 1000 (1000ms = 1s)
// then multiply the result by (MaxFPS / 1000)
// FPS = (1000 - (end - start)) * (MaxFPS / 1000)
FPS = Math.round((1000 - (end - start)) * (60 / 1000));
Here's a complete example, using Python (but easily adapted to any language). It uses the smoothing equation in Martin's answer, so almost no memory overhead, and I chose values that worked for me (feel free to play around with the constants to adapt to your use case).
import time
SMOOTHING_FACTOR = 0.99
MAX_FPS = 10000
avg_fps = -1
last_tick = time.time()
while True:
# <Do your rendering work here...>
current_tick = time.time()
# Ensure we don't get crazy large frame rates, by capping to MAX_FPS
current_fps = 1.0 / max(current_tick - last_tick, 1.0/MAX_FPS)
last_tick = current_tick
if avg_fps < 0:
avg_fps = current_fps
else:
avg_fps = (avg_fps * SMOOTHING_FACTOR) + (current_fps * (1-SMOOTHING_FACTOR))
print(avg_fps)
Set counter to zero. Each time you draw a frame, increment the counter. After each second print the counter. Lather, rinse, repeat. If you want extra credit, keep a running counter and divide by the total number of seconds for a running average.
In (C++-like) pseudocode, these two are what I used in industrial image processing applications that had to process images from a set of externally triggered cameras. Variations in "frame rate" had a different source (slower or faster production on the belt), but the problem is the same. (I assume that you have a simple timer.peek() call that gives you something like the number of milliseconds (or nanoseconds) since application start or since the last call.)
Solution 1: fast but not updated every frame
do while (1)
{
ProcessImage(frame)
if (frame.framenumber%poll_interval==0)
{
new_time=timer.peek()
framerate=poll_interval/(new_time - last_time)
last_time=new_time
}
}
Solution 2: updated every frame, requires more memory and CPU
do while (1)
{
ProcessImage(frame)
new_time=timer.peek()
delta=new_time - last_time
last_time = new_time
total_time += delta
delta_history.push(delta)
framerate= delta_history.length() / total_time
while (delta_history.length() > avg_interval)
{
oldest_delta = delta_history.pop()
total_time -= oldest_delta
}
}
qx.Class.define('FpsCounter', {
extend: qx.core.Object
,properties: {
}
,events: {
}
,construct: function(){
this.base(arguments);
this.restart();
}
,statics: {
}
,members: {
restart: function(){
this.__frames = [];
}
,addFrame: function(){
this.__frames.push(new Date());
}
,getFps: function(averageFrames){
if(!averageFrames){
averageFrames = 2;
}
var time = 0;
var l = this.__frames.length;
var i = averageFrames;
while(i > 0){
if(l - i - 1 >= 0){
time += this.__frames[l - i] - this.__frames[l - i - 1];
}
i--;
}
var fps = averageFrames / time * 1000;
return fps;
}
}
});
How I do it!
boolean run = false;
int ticks = 0;
long tickstart;
int fps;
public void loop()
{
if(this.ticks==0)
{
this.tickstart = System.currentTimeMillis();
}
this.ticks++;
this.fps = (int)(this.ticks * 1000L / Math.max(1L, System.currentTimeMillis()-this.tickstart));
}
In words, a tick clock tracks ticks. The first time through, it records the current time in 'tickstart'. After that, each call sets 'fps' to the number of ticks divided by the time elapsed since the first tick, converted to seconds.
Fps is an integer, hence the "(int)" cast.
Here's how I do it (in Java):
private static long ONE_SECOND = 1000000L * 1000L; //1 second is 1000 ms, and 1 ms is 1,000,000 ns
LinkedList<Long> frames = new LinkedList<>(); //List of frames within 1 second
public int calcFPS(){
long time = System.nanoTime(); //Current time in nano seconds
frames.add(time); //Add this frame to the list
while(true){
long f = frames.getFirst(); //Look at the first element in frames
if(time - f > ONE_SECOND){ //If it was more than 1 second ago
frames.remove(); //Remove it from the list of frames
} else break;
/*If it was within 1 second we know that all other frames in the list
* are also within 1 second
*/
}
return frames.size(); //Return the size of the list
}
In Typescript, I use this algorithm to calculate framerate and frametime averages:
let getTime = () => {
return new Date().getTime();
}
let frames: any[] = [];
let previousTime = getTime();
let framerate:number = 0;
let frametime:number = 0;
let updateStats = (samples:number=60) => {
samples = Math.max(samples, 1) >> 0;
if (frames.length === samples) {
let currentTime: number = getTime() - previousTime;
frametime = currentTime / samples;
framerate = 1000 * samples / currentTime;
previousTime = getTime();
frames = [];
}
frames.push(1);
}
usage:
updateStats();
// Print
stats.innerHTML = Math.round(framerate) + ' FPS ' + frametime.toFixed(2) + ' ms';
Tip: If samples is 1, the result is real-time framerate and frametime.
This is based on KPexEA's answer and gives the Simple Moving Average. Tidied and converted to TypeScript for easy copy and paste:
Variable declaration:
fpsObject = {
maxSamples: 100,
tickIndex: 0,
tickSum: 0,
tickList: []
}
Function:
calculateFps(currentFps: number): number {
this.fpsObject.tickSum -= this.fpsObject.tickList[this.fpsObject.tickIndex] || 0
this.fpsObject.tickSum += currentFps
this.fpsObject.tickList[this.fpsObject.tickIndex] = currentFps
if (++this.fpsObject.tickIndex === this.fpsObject.maxSamples) this.fpsObject.tickIndex = 0
const smoothedFps = this.fpsObject.tickSum / this.fpsObject.maxSamples
return Math.floor(smoothedFps)
}
Usage (may vary in your app):
this.fps = this.calculateFps(this.ticker.FPS)
I adapted #KPexEA's answer to Go, moved the globals into struct fields, allowed the number of samples to be configurable, and used time.Duration instead of plain integers and floats.
type FrameTimeTracker struct {
samples []time.Duration
sum time.Duration
index int
}
func NewFrameTimeTracker(n int) *FrameTimeTracker {
return &FrameTimeTracker{
samples: make([]time.Duration, n),
}
}
func (t *FrameTimeTracker) AddFrameTime(frameTime time.Duration) (average time.Duration) {
// algorithm adapted from https://stackoverflow.com/a/87732/814422
t.sum -= t.samples[t.index]
t.sum += frameTime
t.samples[t.index] = frameTime
t.index++
if t.index == len(t.samples) {
t.index = 0
}
return t.sum / time.Duration(len(t.samples))
}
The use of time.Duration, which has nanosecond precision, eliminates the need for floating-point arithmetic to compute the average frame time, but comes at the expense of needing twice as much memory for the same number of samples.
You'd use it like this:
// track the last 60 frame times
frameTimeTracker := NewFrameTimeTracker(60)
// main game loop
for frame := 0;; frame++ {
// ...
if frame > 0 {
// prevFrameTime is the duration of the last frame
avgFrameTime := frameTimeTracker.AddFrameTime(prevFrameTime)
fps := 1.0 / avgFrameTime.Seconds()
}
// ...
}
Since the context of this question is game programming, I'll add some more notes about performance and optimization. The above approach is idiomatic Go but always involves two heap allocations: one for the struct itself and one for the array backing the slice of samples. If used as indicated above, these are long-lived allocations so they won't really tax the garbage collector. Profile before optimizing, as always.
However, if performance is a major concern, some changes can be made to eliminate the allocations and indirections:
Change samples from a slice of []time.Duration to an array of [N]time.Duration where N is fixed at compile time. This removes the flexibility of changing the number of samples at runtime, but in most cases that flexibility is unnecessary.
Then, eliminate the NewFrameTimeTracker constructor function entirely and use a var frameTimeTracker FrameTimeTracker declaration (at the package level or local to main) instead. Unlike C, Go will pre-zero all relevant memory.
Unfortunately, most of the answers here don't provide either accurate enough or sufficiently "slow responsive" FPS measurements. Here's how I do it in Rust using a measurement queue:
use std::collections::VecDeque;
use std::time::{Duration, Instant};
pub struct FpsCounter {
sample_period: Duration,
max_samples: usize,
creation_time: Instant,
frame_count: usize,
measurements: VecDeque<FrameCountMeasurement>,
}
#[derive(Copy, Clone)]
struct FrameCountMeasurement {
time: Instant,
frame_count: usize,
}
impl FpsCounter {
pub fn new(sample_period: Duration, samples: usize) -> Self {
assert!(samples > 1);
Self {
sample_period,
max_samples: samples,
creation_time: Instant::now(),
frame_count: 0,
measurements: VecDeque::new(),
}
}
pub fn fps(&self) -> f32 {
match (self.measurements.front(), self.measurements.back()) {
(Some(start), Some(end)) => {
let period = (end.time - start.time).as_secs_f32();
if period > 0.0 {
(end.frame_count - start.frame_count) as f32 / period
} else {
0.0
}
}
_ => 0.0,
}
}
pub fn update(&mut self) {
self.frame_count += 1;
let current_measurement = self.measure();
let last_measurement = self
.measurements
.back()
.copied()
.unwrap_or(FrameCountMeasurement {
time: self.creation_time,
frame_count: 0,
});
if (current_measurement.time - last_measurement.time) >= self.sample_period {
self.measurements.push_back(current_measurement);
while self.measurements.len() > self.max_samples {
self.measurements.pop_front();
}
}
}
fn measure(&self) -> FrameCountMeasurement {
FrameCountMeasurement {
time: Instant::now(),
frame_count: self.frame_count,
}
}
}
How to use:
Create the counter:
let mut fps_counter = FpsCounter::new(Duration::from_millis(100), 5);
Call fps_counter.update() on every frame drawn.
Call fps_counter.fps() whenever you like to display current FPS.
Now, the key is in parameters to FpsCounter::new() method: sample_period is how responsive fps() is to changes in framerate, and samples controls how quickly fps() ramps up or down to the actual framerate. So if you choose 10 ms and 100 samples, fps() would react almost instantly to any change in framerate - basically, FPS value on the screen would jitter like crazy, but since it's 100 samples, it would take 1 second to match the actual framerate.
So my choice of 100 ms and 5 samples means that displayed FPS counter doesn't make your eyes bleed by changing crazy fast, and it would match your actual framerate half a second after it changes, which is sensible enough for a game.
Since sample_period * samples is averaging time span, you don't want it to be too short if you want a reasonably accurate FPS counter.
Store a start time and increment your frame counter once per loop. Every few seconds you can just print framecount/(now - starttime) and then reinitialize them.
edit: oops. double-ninja'ed

Graph (Chart) Algorithm

Does anyone have a decent algorithm for calculating axis minima and maxima?
When creating a chart for a given set of data items, I'd like to be able to give the algorithm:
the maximum (y) value in the set
the minimum (y) value in the set
the number of tick marks to appear on the axis
an optional value that must appear as a tick (e.g. zero when showing +ve and -ve values)
The algorithm should return
the largest axis value
the smallest axis value (although that could be inferred from the largest, the interval size and the number of ticks)
the interval size
The ticks should be at a regular interval and should be of a "reasonable" size (e.g. 1, 3, 5, possibly even 2.5, but not anything with more significant figures).
The presence of the optional value will skew this, but without that value the largest item should appear between the top two tick marks, the lowest value between the bottom two.
This is a language-agnostic question, but if there's a C#/.NET library around, that would be smashing ;)
OK, here's what I came up with for one of our applications. Note that it doesn't deal with the "optional value" scenario you mention, since our optional value is always 0, but it shouldn't be hard for you to modify.
Data is continually added to the series, so we just keep the range of y values up to date by inspecting each data point as it's added; this is very inexpensive and easy to keep track of. Equal minimum and maximum values are special-cased: a spacing of 0 indicates that no markers should be drawn.
This solution isn't dissimilar to Andrew's suggestion above, except that it deals, in a slightly kludgy way, with some arbitrary fractions of the exponent multiplier.
Lastly, this sample is in C#. Hope it helps.
private float GetYMarkerSpacing()
{
YValueRange range = m_ScrollableCanvas.
TimelineCanvas.DataModel.CurrentYRange;
if ( range.RealMinimum == range.RealMaximum )
{
return 0;
}
float absolute = Math.Max(
Math.Abs( range.RealMinimum ),
Math.Abs( range.RealMaximum ) ),
spacing = 0;
for ( int power = 0; power < 39; ++power )
{
float temp = ( float ) Math.Pow( 10, power );
if ( temp <= absolute )
{
spacing = temp;
}
else if ( temp / 2 <= absolute )
{
spacing = temp / 2;
break;
}
else if ( temp / 2.5 <= absolute )
{
spacing = temp / 2.5F;
break;
}
else if ( temp / 4 <= absolute )
{
spacing = temp / 4;
break;
}
else if ( temp / 5 <= absolute )
{
spacing = temp / 5;
break;
}
else
{
break;
}
}
return spacing;
}
I've been using the jQuery flot graph library. It's open source and does axis/tick generation quite well. I'd suggest looking at its code and pinching some ideas from there.
I can recommend the following:
Set a visually appealing minimum number of major lines. This will depend on the nature of the data that you're presenting and the size of the plot you're doing, but 7 is a pretty good number
Choose the exponent and the multiplier based on a progression of 1, 2, 5, 10, etc. that will give you at least the minimum number of major lines. (ie. (max-min)/(scale x 10^exponent) >= minimum_tick_marks)
Find the minimum integer multiple of your exponent and multiplier that fits within your range. This will be the first major tick. The rest of the ticks are derived from this.
This was used for an application that allowed arbitrary scaling of data and seemed to work well.
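For what it's worth, here is a rough Python sketch of that recipe (the function name and the min_ticks default are mine, not from the application mentioned):
import math

def nice_spacing(lo, hi, min_ticks=7):
    # assumes hi > lo
    span = hi - lo
    raw = span / min_ticks                       # largest spacing that could still give min_ticks lines
    exponent = math.floor(math.log10(raw))
    for mult in (5, 2, 1):                       # walk down the 1-2-5 progression
        spacing = mult * 10.0 ** exponent
        if span / spacing >= min_ticks:
            break
    first_tick = math.ceil(lo / spacing) * spacing   # first integer multiple of the spacing inside the range
    return spacing, first_tick

print(nice_spacing(3.0, 96.0))   # (10.0, 10.0): major lines at 10, 20, ..., 90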
