Meaning of axis labels in figures from simulation or performance modeling papers

I am reading some papers on simulation and performance modeling. The Y axis in some figures is labeled "Seconds per Simulation Day". I am not sure what it actually means. It spans from 0 to 120 in steps of 20.
Another label is "Simulation years per day". I guess it means the guest OS inside the simulation environment thinks several years have passed while only a day has passed in the real world? But since simulation should slow down execution, it seems more reasonable that only a few hours would pass inside the simulation environment while a day passes in the real world.
Thanks.

Without seeing the paper, I assume they are trying to compare the CPU time it takes to reach some amount of physical (simulated) time in a simulation.
So "Seconds per Simulation Day" is likely the wall time it took to advance the simulation by 24 hours of simulated time.
Likewise, "Simulation Years per Day" is the simulated physical time achieved per real-life day.
Of course, without seeing the paper it's impossible to know for sure. It's also possible they are looking at CPU-seconds or CPU-days, which would be nCPUs*walltime.

Simulations typically run in discrete time units, called time steps. If you'd like to simulate a process that spans a certain amount of time in the simulation, you have to perform a certain number of time steps. If the length of a time step is fixed, the number of steps is just the simulated time divided by the length of the time step. The calculations in each time step take a certain amount of time, and the total run time for the simulation equals the number of time steps times the time it takes to perform one time step:
(1) total_time = (simulation_time / timestep_length) * run_time_per_timestep
Now several benchmark quantities can be obtained by fixing different parameters. E.g. if you fix simulation_time = 1 day, then total_time gives you the run time needed for one simulated day, i.e.
(2) seconds_per_sim_day = (1 day / timestep_length) * run_time_per_timestep
Large values of seconds_per_sim_day could mean:
it takes too much time to compute a single time step, i.e. run_time_per_timestep is too high -> the computation algorithm should be optimised for speed;
the time step is too short -> search for better algorithms that can accept larger time steps and still produce (almost) the same result.
On the other hand, if you solve (1) for simulation_time and fix total_time = 1 day, you get the number of time steps that can be performed per day times the length of the time step, or the total simulation time that can be achieved per day of computation:
(3) simulation_time_per_day = (1 day / run_time_per_step) * timestep_length
Now one can observe that:
larger time steps lead to larger values of simulation_time_per_day, i.e. longer simulations can be computed;
if it takes too much time to compute a time step, the value of simulation_time_per_day would go down.
Usually these figures are used when making decisions about buying CPU time at some computing centre. For example, if you would like to simulate 100 years, just divide that by the number of simulation years per day and you get how many compute days you would have to pay (or wait) for. Larger values of simulation_time_per_day definitely benefit you in this case. If, on the other hand, you only have 10 compute days at your disposal, then you can compute how long a simulation could be run and make some decisions, e.g. more short simulations with many different parameters vs. fewer but longer simulations with parameters that you have predicted to be the optimal ones.
In real life things are much more complicated. Usually computing each time step can take a different amount of time (although there are cases where each time step takes exactly the same amount of time as all the others), and it strongly depends on the simulation size, configuration, etc. That's why standardised tests exist and usually some averaged value is reported.
Just to summarise: given that all test parameters are kept equal,
faster computers would give less "seconds per simulation day" and more "simulation years per day"
slower computers would give more "seconds per simulation day" and less "simulation years per day"
By the way, both quantities are reciprocal and related by this simple equation:
simulation_years_per_day = 236.55 / seconds_per_simulation_day
(that is, "simulation years per day" equals 86400 divided by "seconds per simulation day" /which gives you the simulation days per day/ and then divided by 365.25 to convert the result into years)
So it doesn't really matter whether "simulation years per day" or "seconds per simulation day" is presented. One just has to choose the representation which most clearly shows how much better the newer system is than the previous/older/existing one :)
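As a concrete illustration of formulas (2) and (3) and of the 86400 / 365.25 conversion above, here is a minimal Python sketch; the timestep length and per-step run time are made-up example values, not numbers from any paper.
SECONDS_PER_DAY = 86400.0
DAYS_PER_YEAR = 365.25

def seconds_per_sim_day(timestep_length_s, run_time_per_step_s):
    """Formula (2): wall-clock seconds needed to advance the model by one simulated day."""
    return (SECONDS_PER_DAY / timestep_length_s) * run_time_per_step_s

def sim_years_per_day(secs_per_sim_day):
    """Reciprocal form: simulated years achieved per wall-clock day of computing."""
    sim_days_per_day = SECONDS_PER_DAY / secs_per_sim_day
    return sim_days_per_day / DAYS_PER_YEAR   # same as 236.55 / secs_per_sim_day

# Example: 600 s time steps (144 per simulated day), 0.25 s of wall time per step
s = seconds_per_sim_day(timestep_length_s=600.0, run_time_per_step_s=0.25)
print(f"{s:.0f} s per simulated day -> {sim_years_per_day(s):.2f} simulated years per day")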

Related

Are the decimal components of Unix timestamps and UTC time synced?

Conventional time is meant to stay in sync with the rotation of the earth, and so is shifted with leap years and leap seconds, while Unix time is meant to measure the number of seconds since midnight Jan 1 1970. As such, the two drift apart over time.
But what about the decimals? It seems to me that if you took just the decimal portion of UTC, Unix time, and frankly any other time zone, they should line up except during the exact time a leap second or leap smear is taking place.
Are the decimal components of Unix timestamps and UTC time synced (except during such events)?
The reason leap seconds are issued is because we have 2 different definitions of measuring a second:
As 1⁄86400 of one rotation of the earth (a day)
A more stable definition from the SI standard: https://en.wikipedia.org/wiki/Second
These two definitions do not give seconds of equal length. In science and computing we prefer something very exact (the SI second), while for wall clocks we prefer the second to be 1⁄86400 of a day.
To make clocks on computers match up with our expectation of the rotation-based clock, we add or remove seconds in the form of leap seconds.
What's really going on is that the 'length' of these two second definitions is different and keeps changing (relative to each other). Once the difference has caused the clock to drift far enough, we just add a second to our computers to match the other definition.
But this drift is not instant. It happens over time. This means that both of these clocks slowly drift apart.
The suggestion that the 'decimals' are the same doesn't really make much sense then. The difference between these decimals grows and grows until we have to add or remove a second to bring them closer together again. The Earth's rotation isn't suddenly an extra second faster one day.
So when you ask the question: are they synced? It's asking whether the rotation of the earth is synced. We don't yet have the power to make the earth spin slower or faster ;)

Getting the maximum data an algorithm can process in a certain time span

Well, meanwhile it's the second time I've been given an exercise where I have to determine (in this case for sorting algorithms) how many numbers I can sort with a certain algorithm (on my own computer) so that the algorithm runs for exactly one minute.
This is a practical exercise, meaning I must generate enough numbers so that it runs that long. Now I ask myself, since I haven't had this problem in all ten years of programming: how can I possibly do this? My first attempt was a bit brute-forcy, which resulted in an instant StackOverflow.
I could make an array (or several) and fill them with random numbers, but determining how many would end up in a one-minute runtime would be a terribly long task, since you would always need to wait.
What can I do to efficiently find this out? Measuring the difference between, let's say, 10 and 20 numbers and calculating how much it would take to fill a minute? Sounds easy, but algorithms (especially sorting algorithms) are rarely linear.
You know the time complexity of each algorithm in question. For example, bubble sort takes O(n²) time. Make a relatively small sample run of D = 1000 records and measure the time it takes (T milliseconds). For example, it takes 15 seconds = 15000 milliseconds.
Now, with more or less accuracy, you can expect that D*2 records will be processed 4 times slower. And vice versa: you need about D * sqrt(60000/T) records to process them in 1 minute. For example, you need D * sqrt(60000/15000) = D * sqrt(4) = D*2 = 2000 records.
This method is not accurate enough to get an exact number, and in most cases the exact number of records is not fixed; it fluctuates from run to run. Also, for many algorithms the time it takes depends on the values in your record set. For example, the worst case for quicksort is O(n²), while the normal case is O(n log n).
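As a rough sketch of that extrapolation (the one-minute target, the 1000-record sample size, and the quadratic-complexity assumption are placeholders, and the bubble sort here is just an example subject):
import math
import random
import time

def estimate_records_for_target(sort_fn, target_seconds=60.0, sample_n=1000):
    """Estimate how many records an O(n^2) sort can handle in target_seconds.

    Assumes run time ~ c * n^2: measure the constant on a small sample and
    solve for n. For an O(n log n) algorithm the scaling formula differs.
    """
    data = [random.random() for _ in range(sample_n)]
    start = time.perf_counter()
    sort_fn(data)
    sample_seconds = time.perf_counter() - start
    # t ~ c*n^2  =>  n_target = n_sample * sqrt(t_target / t_sample)
    return int(sample_n * math.sqrt(target_seconds / sample_seconds))

def bubble_sort(a):
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print("approximate records for one minute:", estimate_records_for_target(bubble_sort))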
You could use something like this:
long startTime = System.currentTimeMillis();
int times = 0;
boolean done = false;
while (!done) {
    // run the algorithm once on your test data here
    times++;
    if (System.currentTimeMillis() - startTime >= 60000)
        done = true;
}
Or, if you don't want to wait that long, you can replace the 60000 by 1000 and then multiply times by 60; it won't be very accurate though.
It would be time consuming to generate a new number every time, so you can use an array that you populate beforehand and then access with the times variable, or you can always use the same input, the one which you know would be the most time consuming to process, so that you get the minimum number of times it would run in a minute.

Density of time events

I am working on an assignment where I am supposed to compute the density of an event. Let's say that a certain event happens 5 times within a few seconds; that would mean it has a higher density than if it happened 5 times over several hours.
I have in my possession the times at which the event happens.
I was first thinking about computing the elapsed time between each two successive events and then play with the average and mean of these values.
My problem is that I do not know how to accurately represent this notion of density through mathematics. Let's say that I have 5 events happening really close to each other, and then a long break, and then again 5 events happening really close to each other. I would like to be able to represent this as high density. How should I go about it?
In the last example, I understand that my mean won't be truly representative but that my standard deviation will show that. However, how could I have a single density value (let's say between 0 and 1) with which I could rank different events?
Thank you for your help!
I would try the harmonic mean, which represents the rate at which your events happen while still giving you an averaged time value. For n inter-event intervals t_1, ..., t_n it is defined by H = n / (1/t_1 + 1/t_2 + ... + 1/t_n).
I think its behaviour is close to what you expect, as it measures what you want, though not between 0 and 1 and with inverse tendencies (small values mean dense, large values mean sparse). Let us go through a few of your examples:
~5 events in an hour. Let us suppose for simplicity there is 10 minutes between each event. Then we have H = 6 /(6 * 1/10) = 10
~5 events in 10 minutes, then nothing until the end of the hour (50 minutes). Let us suppose all short intervals are 2.5 minutes, then H = 6 / (5/2.5 + 1/50) = 6 * 50 / 101 = 2.97
~5 events in 10 minutes, but this cycle restarts every half hour thus we have 20 minutes as the last interval instead of 50. Then we get H = 6 / (5/2.5 + 1/20) = 6 * 20 / 41 = 2.92
As you can see, the effect of the longer and rarer intervals in a set is diminished by the fact that we use inverses, thus giving less weight to the "in between bursts" behaviour. Also, you can compare behaviours with the same "burst density" that do not happen at the same frequency, and you will get numbers that are close but whose ordering still reflects this difference.
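A small Python sketch of this, reproducing the second worked example above (the interval values are the ones assumed in that example):
def harmonic_mean(intervals):
    """Harmonic mean of inter-event intervals: small values = dense, large values = sparse."""
    return len(intervals) / sum(1.0 / t for t in intervals)

# Second example above: five 2.5-minute gaps within the burst, then a 50-minute gap
intervals = [2.5, 2.5, 2.5, 2.5, 2.5, 50.0]
print(round(harmonic_mean(intervals), 2))   # ~2.97 minutes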
For density to make sense you need to define 2 things:
the range where you look at it,
and the unit of time
After that you can say for example, that from 12:00 to 12:10 the density of the event was an average of 10/minute.
What makes sense in your case obviously depends on what your input data is. If your measurement lasts for 1 hour and you have millions of entries, then seconds or milliseconds are probably the better choice of unit. If you measure for a week and have only a few entries, then a day is a better unit.
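A minimal sketch of that bookkeeping (the window boundaries, the one-minute unit, and the event timestamps below are arbitrary choices for illustration):
def density(event_times, window_start, window_end, unit_seconds=60.0):
    """Average number of events per chosen time unit inside [window_start, window_end)."""
    count = sum(window_start <= t < window_end for t in event_times)
    return count / ((window_end - window_start) / unit_seconds)

# Ten events, one per minute, inside a 10-minute window -> 1.0 event per minute
events = [i * 60.0 for i in range(10)]
print(density(events, 0.0, 600.0))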

Comparison of Sorting Algorithms using running time in terms of seconds

I have devised a test in order to compare the different running times of my sorting algorithm with insertion sort, bubble sort, quick sort, selection sort, and shell sort. I have based my test on the one done on this website http://warp.povusers.org/SortComparison/index.html, but I modified my test a bit.
I set up a test manager server program which generates the data; the test manager sends it to the clients that run the different algorithms, so they are all sorting the same data and there is no bias.
I noticed that the insertion sort, bubble sort, and selection sort algorithms really did run for a very long time (some more than 15 minutes) just to sort one data set of size 100,000 or 1,000,000.
So I changed the number of runs per test case for those two data sizes. My original number of runs for 100,000 was 500, but I reduced it to 15; for 1,000,000 it was 100 and I reduced it to 3.
Now my professor doubts the credibility of the results because I've reduced the number of runs that much, but as I've observed, the running time for sorting a specific data distribution varies only by a small percentage, which is why I find that even with that reduction I'd still be able to approximate the average runtime for that specific test case of that algorithm.
My question now is: is my assumption wrong? Does the machine at times produce significant running time changes (>50%)? For example, if sorting the same data over and over gives 0.3 milliseconds on the first run, could a later run differ as much as taking 1.5 seconds? Because from my observation, the running times don't vary largely given the same type of test distribution (e.g. completely random, completely sorted, completely reversed).
What you are looking for is a way to measure the error in your experiments. My favorite book on the subject is Error Analysis by Taylor; Chapter 4 has what you need, which I'll summarize here.
You need to calculate the standard error of the mean, or SDOM. First calculate the mean and standard deviation (the formulas are on Wikipedia and quite simple). Your SDOM is the standard deviation divided by the square root of the number of measurements. Assuming your timings have a Normal distribution (which they should), twice the value of the SDOM is a very common way to specify the +/- error.
For example, let's say you run a sorting algorithm 5 times and get the following numbers: 5, 6, 7, 4, 5. Then the mean is 5.4 and the standard deviation is 1.1. Therefore the SDOM is 1.1/sqrt(5) = 0.5, so 2*SDOM = 1. Now you can say that the algorithm run time was 5.4 ± 1. Your professor can determine whether this is an acceptable measurement error. Notice that as you take more readings, your SDOM, i.e. the plus-or-minus error, goes down in inverse proportion to the square root of N. A two-SDOM interval has a 95% probability (confidence) that the true value lies within it, which is the accepted standard.
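A short Python check of that worked example (using the sample standard deviation, i.e. the n-1 denominator):
import math
import statistics

timings = [5, 6, 7, 4, 5]
mean = statistics.mean(timings)            # 5.4
sdev = statistics.stdev(timings)           # sample standard deviation, ~1.14
sdom = sdev / math.sqrt(len(timings))      # standard error of the mean, ~0.51
print(f"run time = {mean:.1f} +/- {2 * sdom:.1f}")   # 5.4 +/- 1.0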
Also, you most likely want to measure performance using CPU time instead of a simple timer. Modern CPUs are too complex, with various cache levels and pipeline optimizations, and you might end up getting a less accurate measurement if you use a wall-clock timer. More about CPU time is in this answer: How can I measure CPU time and wall clock time on both Linux/Windows?
It absolutely does. You need a variety of "random" samples in order to be able to draw proper conclusions about the population.
Look at it this way. It takes a long time to poll 100,000 people in the U.S. about their political stance. If we reduce the sample size to 100 people in order to complete it faster, we not only reduce the precision of our final result (2 decimal places rather than 5), we also introduce a larger chance that the members of the sample have a specific bias (there is a greater chance that 100 people out of 3xx,000,000 think the same way than 100,000 out of those same 3xx,000,000).
Your professor is right; however, he has not provided the details, so I mention some of them here:
Sampling issue: It's right that you generate some random numbers and feed them to your sorting methods, but with only a few test cases you are indeed biased, because almost all random functions are biased to some extent (especially to the state of the machine or the time of the moment), so you should use more and more test cases to be more confident about the randomness.
Machine state: Suppose you have provided perfect data (fully representative of a uniform distribution); the performance of electro-mechanical devices like computers may still vary in different situations, so you should repeat the runs a considerable number of times to smooth out these effects.
Note: In advanced technical reports, you should provide a confidence coefficient for the answers you give, derived from statistical analysis and proven step by step, but if you don't need to be that exact, simply increase these:
The size of the data
The number of tests

Smart progress bar ETA computation

In many applications, we have some progress bar for a file download, for a compression task, for a search, etc. We all often use progress bars to let users know something is happening. And if we know some details like just how much work has been done and how much is left to do, we can even give a time estimate, often by extrapolating from how much time it's taken to get to the current progress level.
[screenshot of a progress bar with an ETA estimate (source: jameslao.com)]
But we've also all seen programs whose Time Left "ETA" display is just comically bad. It claims a file copy will be done in 20 seconds, then one second later it says it's going to take 4 days, then it flickers again to 20 minutes. It's not only unhelpful, it's confusing!
The reason the ETA varies so much is that the progress rate itself can vary and the programmer's math can be overly sensitive.
Apple sidesteps this by just avoiding any accurate prediction and just giving vague estimates!
[screenshot of a vague progress estimate (source: autodesk.com)]
That's annoying too: do I have time for a quick break, or is my task going to be done in 2 more seconds? If the prediction is too fuzzy, it's pointless to make any prediction at all.
Easy but wrong methods
As a first-pass ETA computation, probably we all just make a function like: if p is the fraction that's done already and t is the time it's taken so far, output t*(1-p)/p as the estimate of how long it's going to take to finish. This simple ratio works "OK", but it's also terrible, especially at the end of a computation. If your slow download speed keeps a copy slowly advancing overnight, and finally in the morning something kicks in and the copy starts going at full speed, 100X faster, your ETA at 90% done may say "1 hour", and 10 seconds later you're at 95% and the ETA will say "30 minutes", which is clearly an embarrassingly poor guess. In this case "10 seconds" is a much, much, much better estimate.
When this happens you may think to change the computation to use recent speed, not average speed, to estimate the ETA. You take the average download rate or completion rate over the last 10 seconds, and use that rate to project how long completion will take. That performs quite well in the previous overnight-download-which-sped-up-at-the-end example, since it will give very good final completion estimates at the end. But this still has big problems: it causes your ETA to bounce wildly when your rate varies quickly over a short period of time, and you get the "done in 20 seconds, done in 2 hours, done in 2 seconds, done in 30 minutes" rapid display of programming shame.
The actual question:
What is the best way to compute an estimated time of completion of a task, given the time history of the computation? I am not looking for links to GUI toolkits or Qt libraries. I'm asking about the algorithm to generate the most sane and accurate completion time estimates.
Have you had success with math formulas? Some kind of averaging, maybe by using the mean of the rate over 10 seconds with the rate over 1 minute with the rate over 1 hour? Some kind of artificial filtering like "if my new estimate varies too much from the previous estimate, tone it down, don't let it bounce too much"? Some kind of fancy history analysis where you integrate progress versus time advancement to find standard deviation of rate to give statistical error metrics on completion?
What have you tried, and what works best?
Original Answer
The company that created this site apparently makes a scheduling system that answers this question in the context of employees writing code. The way it works is with a Monte Carlo simulation of the future based on the past.
Appendix: Explanation of Monte Carlo
This is how this algorithm would work in your situation:
You model your task as a sequence of microtasks, say 1000 of them. Suppose an hour later you have completed 100 of them. Now you run the simulation for the remaining 900 steps by randomly selecting 90 completed microtasks, adding their times and multiplying by 10. There you have one estimate; repeat N times and you have N estimates for the time remaining. Note that the average of these estimates will be about 9 hours -- no surprise there. But by presenting the resulting distribution to the user you'll honestly communicate the odds, e.g. 'with 90% probability this will take another 3-15 hours'.
This algorithm, by definition, produces a complete result if the task in question can be modeled as a bunch of independent, random microtasks. You can get a better answer only if you know how the task deviates from this model: for example, installers typically have a download/unpacking/installing task list, and the speed of one phase cannot predict the speed of another.
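A minimal Python sketch of that resampling scheme (the completed-task durations below are invented; the 90% interval is read off the 5th and 95th percentiles of the simulated distribution, and the sample-then-scale shortcut follows the description above):
import random

def monte_carlo_eta(completed, remaining_count, trials=1000, sample_size=None):
    """Estimate remaining time by resampling completed microtask durations.

    Draws a sample of completed durations, sums it, and scales up to the
    number of remaining microtasks; the 5th and 95th percentiles of the
    simulated distribution give a rough 90% interval.
    """
    if sample_size is None:
        sample_size = max(1, remaining_count // 10)   # e.g. 90 draws for 900 remaining steps
    scale = remaining_count / sample_size
    estimates = sorted(
        scale * sum(random.choice(completed) for _ in range(sample_size))
        for _ in range(trials)
    )
    return estimates[int(0.05 * trials)], estimates[int(0.95 * trials)]

# 100 completed microtasks that took roughly 36 s each (about an hour in total), 900 remaining
completed = [random.uniform(20.0, 52.0) for _ in range(100)]
low, high = monte_carlo_eta(completed, remaining_count=900)
print(f"remaining: between {low / 3600:.1f} and {high / 3600:.1f} hours in 90% of trials")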
Appendix: Simplifying Monte Carlo
I'm not a statistics guru, but I think that if you look closer at the simulation in this method, it will always return a normal distribution, being a sum of a large number of independent random variables. Therefore, you don't need to perform it at all. In fact, you don't even need to store all the completed times, since you'll only need their sum and the sum of their squares.
In maybe not very standard notation,
sigma = sqrt ( sum_of_times_squared - sum_of_times^2 / elapsedSteps )
scaling = 900/100 // that is (totalSteps - elapsedSteps) / elapsedSteps
lowerBound = sum_of_times*scaling - 3*sigma*sqrt(scaling)
upperBound = sum_of_times*scaling + 3*sigma*sqrt(scaling)
With this, you can output the message saying that the thing will end between [lowerBound, upperBound] from now with some fixed probability (should be about 95%, but I probably missed some constant factor).
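Under the same independence assumption, a sketch of those bounds computed directly from the running sum and sum of squares (keeping the ±3-sigma width from the pseudocode above; the function and variable names are mine):
import math

def remaining_time_bounds(sum_times, sum_times_sq, elapsed_steps, total_steps):
    """Closed-form version of the Monte Carlo estimate above.

    sigma is the standard deviation of the sum of the completed durations;
    the spread of the remaining time scales with sqrt(scaling) if the
    microtask durations are independent.
    """
    scaling = (total_steps - elapsed_steps) / elapsed_steps          # e.g. 900/100
    sigma = math.sqrt(max(sum_times_sq - sum_times ** 2 / elapsed_steps, 0.0))
    center = sum_times * scaling
    spread = 3.0 * sigma * math.sqrt(scaling)
    return center - spread, center + spread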
Here's what I've found works well! For the first 50% of the task, you assume the rate is constant and extrapolate. The time prediction is very stable and doesn't bounce much.
Once you pass 50%, you switch computation strategy. You take the fraction of the job left to do (1-p), then look back in a history of your own progress and find (by binary search and linear interpolation) how long it took you to do the last (1-p) fraction of the work, and use that as your time-to-completion estimate.
So if you're now 71% done, you have 29% remaining. You look back in your history and find how long ago you were at (71-29=42%) completion. Report that time as your ETA.
This is naturally adaptive. If you have X amount of work to do, it looks only at the time it took to do the X amount of work. At the end when you're at 99% done, it's using only very fresh, very recent data for the estimate.
It's not perfect of course but it smoothly changes and is especially accurate at the very end when it's most useful.
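A rough Python sketch of that look-back idea; the history format (a list of (timestamp, progress) pairs) is an assumption of this sketch, and the linear interpolation between history points is omitted for brevity:
import bisect

def eta_look_back(history, now, progress):
    """ETA from how long the most recent equal-sized chunk of work took.

    history: chronological list of (timestamp, progress) pairs, progress in [0, 1].
    Below 50% done, fall back to simple linear extrapolation.
    """
    remaining = 1.0 - progress
    if progress <= 0.5:
        elapsed = now - history[0][0]
        return elapsed * remaining / max(progress, 1e-9)
    target = progress - remaining                 # e.g. 71% done -> look back to 42%
    progresses = [p for _, p in history]
    i = min(bisect.bisect_left(progresses, target), len(history) - 1)
    return now - history[i][0]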
Whilst all the examples are valid, for the specific case of 'time left to download', I thought it would be a good idea to look at existing open source projects to see what they do.
From what I can see, Mozilla Firefox is the best at estimating the time remaining.
Mozilla Firefox
Firefox keeps track of the last estimate for time remaining, and by using this and the current estimate for time remaining, it performs a smoothing function on the time.
See the ETA code here. This uses a 'speed' which is previously calculated here and is a smoothed average of the last 10 readings.
This is a little complex, so to paraphrase:
Take a smoothed average of the speed based 90% on the previous speed and 10% on the new speed.
With this smoothed average speed work out the estimated time remaining.
Use this estimated time remaining, and the previous estimated time remaining, to create a new estimated time remaining (in order to avoid jumping)
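Roughly, in Python (the 90/10 split on speed is from the description above; blending the old and new ETA with equal weights is my assumption, not Firefox's exact formula):
class SmoothedEta:
    """Firefox-style ETA: smooth the speed first, then smooth the ETA itself."""

    def __init__(self):
        self.speed = None   # smoothed bytes per second
        self.eta = None     # smoothed seconds remaining

    def update(self, current_speed, bytes_remaining):
        # 90% previous speed, 10% new sample, as described above
        self.speed = current_speed if self.speed is None else 0.9 * self.speed + 0.1 * current_speed
        raw_eta = bytes_remaining / max(self.speed, 1e-9)
        # blend with the previous ETA to avoid jumps (equal weights assumed here)
        self.eta = raw_eta if self.eta is None else 0.5 * (self.eta + raw_eta)
        return self.eta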
Google Chrome
Chrome seems to jump about all over the place, and the code shows this.
One thing I do like with Chrome though is how they format time remaining.
For > 1 hour it says '1 hrs left'
For < 1 hour it says '59 mins left'
For < 1 minute it says '52 secs left'
You can see how it's formatted here
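Something along these lines (the strings and cut-offs are paraphrased from the list above, not taken from Chrome's actual code):
def format_time_left(seconds):
    """Coarse, Chrome-like formatting: only the largest relevant unit is shown."""
    if seconds >= 3600:
        return f"{int(seconds // 3600)} hrs left"
    if seconds >= 60:
        return f"{int(seconds // 60)} mins left"
    return f"{int(seconds)} secs left"

print(format_time_left(3700))   # 1 hrs left
print(format_time_left(3540))   # 59 mins left
print(format_time_left(52))     # 52 secs left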
DownThemAll! Manager
It doesn't use anything clever, meaning the ETA jumps about all over the place.
See the code here
pySmartDL (a python downloader)
Takes the average ETA of the last 30 ETA calculations. Sounds like a reasonable way to do it.
See the code in pySmartDL/pySmartDL.py#L651 (commit 916f2592db326241a2bf4d8f2e0719c58b71e385)
Transmission
Gives a pretty good ETA in most cases (except when starting off, as might be expected).
Uses a smoothing factor over the past 5 readings, similar to Firefox but not quite as complex. Fundamentally similar to Gooli's answer.
See the code here
I usually use an Exponential Moving Average to compute the speed of an operation with a smoothing factor of say 0.1 and use that to compute the remaining time. This way all the measured speeds have influence on the current speed, but recent measurements have much more effect than those in the distant past.
In code it would look something like this:
alpha = 0.1 # smoothing factor
...
speed = (speed * (1 - alpha)) + (currentSpeed * alpha)
If your tasks are uniform in size, currentSpeed would simply be the time it took to execute the last task. If the tasks have different sizes and you know that one task is supposed to be, say, twice as long as another, you can divide the time it took to execute the task by its relative size to get the current speed. Using speed you can compute the remaining time by multiplying it by the total size of the remaining tasks (or just by their number if the tasks are uniform).
Hopefully my explanation is clear enough, it's a bit late in the day.
In certain instances, when you need to perform the same task on a regular basis, it might be a good idea to use past completion times to average against.
For example, I have an application that loads the iTunes library via its COM interface. The size of a given iTunes library generally does not increase dramatically from launch to launch in terms of the number of items, so in this example it might be possible to track the last three load times and load rates, average them, and compute your current ETA from that.
This would be hugely more accurate than an instantaneous measurement and probably more consistent as well.
However, this method depends upon the size of the task being relatively similar to the previous ones, so this would not work for a decompressing method or something else where any given byte stream is the data to be crunched.
Just my $0.02
First off, it helps to generate a running moving average. This weights more recent events more heavily.
To do this, keep a bunch of samples around (circular buffer or list), each a pair of progress and time. Keep the most recent N seconds of samples. Then generate a weighted average of the samples:
totalProgress += (curSample.progress - prevSample.progress) * scaleFactor
totalTime += (curSample.time - prevSample.time) * scaleFactor
where scaleFactor goes linearly from 0...1 as an inverse function of time in the past (thus weighing more recent samples more heavily). You can play around with this weighting, of course.
At the end, you can get the average rate of change:
averageProgressRate = (totalProgress / totalTime);
You can use this to figure out the ETA by dividing the remaining progress by this number.
However, while this gives you a good trending number, you have one other issue - jitter. If, due to natural variations, your rate of progress moves around a bit (it's noisy) - e.g. maybe you're using this to estimate file downloads - you'll notice that the noise can easily cause your ETA to jump around, especially if it's pretty far in the future (several minutes or more).
To avoid jitter from affecting your ETA too much, you want this average rate of change number to respond slowly to updates. One way to approach this is to keep around a cached value of averageProgressRate, and instead of instantly updating it to the trending number you've just calculated, you simulate it as a heavy physical object with mass, applying a simulated 'force' to slowly move it towards the trending number. With mass, it has a bit of inertia and is less likely to be affected by jitter.
Here's a rough sample:
// desiredAvgProgressRate is computed from the weighted average above
// m_averageProgressRate is a member variable, also in progress units/sec
// lastTimeElapsed = the time delta in seconds (since the last simulation step)
// m_averageSeekSpeed is a member variable used to hold the
// velocity of m_averageProgressRate
const float frictionCoeff = 0.75f;
const float mass = 4.0f;
const float maxSpeedCoeff = 0.25f;

// lose 25% of our speed per sec, simulating friction
m_averageSeekSpeed *= pow(frictionCoeff, lastTimeElapsed);

float delta = desiredAvgProgressRate - m_averageProgressRate;

// update the velocity
float oldSpeed = m_averageSeekSpeed;
float accel = delta / mass;
m_averageSeekSpeed += accel * lastTimeElapsed; // v += at

// clamp the top speed to 25% of our current value
float sign = (m_averageSeekSpeed > 0.0f ? 1.0f : -1.0f);
float maxVal = m_averageProgressRate * maxSpeedCoeff;
if (fabs(m_averageSeekSpeed) > maxVal)
{
    m_averageSeekSpeed = sign * maxVal;
}

// make sure they have the same sign
if ((m_averageSeekSpeed > 0.0f) == (delta > 0.0f))
{
    float adjust = (oldSpeed + m_averageSeekSpeed) * 0.5f * lastTimeElapsed;
    // don't overshoot.
    if (fabs(adjust) > fabs(delta))
    {
        adjust = delta;
        // apply damping
        m_averageSeekSpeed *= 0.25f;
    }
    m_averageProgressRate += adjust;
}
Your question is a good one. If the problem can be broken up into discrete units, an accurate calculation often works best. Unfortunately this may not be the case: even if you are installing 50 components, each one might be 2% of the work, but one of them can be massive. One thing that I have had moderate success with is to clock the CPU and disk and give a decent estimate based on observational data. Knowing that certain checkpoints are really point x allows you some opportunity to correct for environmental factors (network, disk activity, CPU load). However, this solution is not general in nature due to its reliance on observational data. Using ancillary data such as the RPM file size helped me make my progress bars more accurate, but they are never bulletproof.
Uniform averaging
The simplest approach would be to predict the remaining time linearly:
t_rem := t_spent * ( n - prog ) / prog
where t_rem is the predicted ETA, t_spent is the time elapsed since the commencement of the operation, prog the number of microtasks completed out of their full quantity n. To explain—n may be the number of rows in a table to process or the number of files to copy.
This method having no parameters, one need not worry about fine-tuning an attenuation exponent. The trade-off is poor adaptation to a changing progress rate, because all samples contribute equally to the estimate, whereas it is only meet that recent samples should have more weight than old ones, which leads us to
Exponential smoothing of rate
in which the standard technique is to estimate progress rate by averaging previous point measurements:
rate := 1 / (n * dt);               { rate equals normalized progress per unit time }
if prog = 1 then                    { if the first microtask has just completed }
  rate_est := rate                  { initialize the estimate }
else
begin
  weight   := Exp( - dt / DECAY_T );
  rate_est := rate_est * weight + rate * (1.0 - weight);
  t_rem    := (1.0 - prog / n) / rate_est;
end;
where dt denotes the duration of the last completed microtask and is equal to the time passed since the previous progress update. Notice that weight is not a constant and must be adjusted according to the length of time during which a certain rate was observed, because the longer we observe a certain speed, the higher the exponential decay of the previous measurements. The constant DECAY_T denotes the length of time over which the weight of a sample decreases by a factor of e. SPWorley himself suggested a similar modification to gooli's proposal, although he applied it to the wrong term. An exponential average for equidistant measurements is:
Avg_e(n) = Avg_e(n-1) * alpha + m_n * (1 - alpha)
but what if the samples are not equidistant, as is the case with times in a typical progress bar? Take into account that alpha above is but an empirical quotient whose true value is:
alpha = Exp( - lambda * dt ),
where lambda is the parameter of the exponential window and dt the amount of change since the previous sample, which need not be time, but any linear and additive parameter. alpha is constant for equidistant measurements but varies with dt.
Mark that this method relies on a predefined time constant and is not scalable in time. In other words, if exactly the same process is uniformly slowed down by a constant factor, this rate-based filter will become proportionally more sensitive to signal variations, because at every step the weight will be decreased. If, however, we desire a smoothing independent of the time scale, we should consider
Exponential smoothing of slowness
which is essentially the smoothing of rate turned upside down, with the added simplification of a constant weight because prog grows by equidistant increments:
slowness := n * dt;                 { slowness is the amount of time per unit of normalized progress }
if prog = 1 then                    { if the first microtask has just completed }
  slowness_est := slowness          { initialize the estimate }
else
begin
  weight       := Exp( - 1 / (n * DECAY_P) );
  slowness_est := slowness_est * weight + slowness * (1.0 - weight);
  t_rem        := (1.0 - prog / n) * slowness_est;
end;
The dimensionless constant DECAY_P denotes the normalized progress difference between two samples of which the weights are in the ratio of one to e. In other words, this constant determines the width of the smoothing window in progress domain, rather than in time domain. This technique is therefore independent of the time scale and has a constant spatial resolution.
Further research: adaptive exponential smoothing
You are now equipped to try the various algorithms of adaptive exponential smoothing. Only remember to apply it to slowness rather than to rate.
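For reference, a direct Python transcription of the slowness-smoothing pseudocode above (same quantities and update rule; the DECAY_P value and the class wrapper are mine):
import math

DECAY_P = 0.05   # smoothing window width as a fraction of total progress

class SlownessEta:
    """Exponential smoothing of slowness (time per unit of normalized progress)."""

    def __init__(self, n):
        self.n = n                  # total number of microtasks
        self.prog = 0               # microtasks completed so far
        self.slowness_est = None

    def update(self, dt):
        """Call after each completed microtask; dt is the time that microtask took."""
        self.prog += 1
        slowness = self.n * dt
        if self.slowness_est is None:
            self.slowness_est = slowness
        else:
            weight = math.exp(-1.0 / (self.n * DECAY_P))
            self.slowness_est = self.slowness_est * weight + slowness * (1.0 - weight)
        return (1.0 - self.prog / self.n) * self.slowness_est   # estimated time remaining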
I always wish these things would tell me a range. If it said, "This task will most likely be done in between 8 min and 30 minutes," then I have some idea of what kind of break to take. If it's bouncing all over the place, I'm tempted to watch it until it settles down, which is a big waste of time.
I have tried and simplified your "easy"/"wrong"/"OK" formula and it works best for me:
t / p - t
In Python:
>>> done=0.3; duration=10; "time left: %i" % (duration / done - duration)
'time left: 23'
That saves one op compared to (dur*(1-done)/done). And, in the edge case you describe, possibly ignoring the dialog for 30 minutes extra hardly matters after waiting all night.
Comparing this simple method to the one used by Transmission, I found it to be up to 72% more accurate.
I don't sweat it, it's a very small part of an application. I tell them what's going on, and let them go do something else.
