Better algorithm for estimating download time [duplicate] - algorithm

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Estimating/forecasting download completion time
We've all seen the running download-time estimate that initially says something like "7 days", but keeps dropping wildly (e.g. "23 hours", "45 minutes", "1 min. 50 sec", etc.) with each successive estimation as the chunks are downloaded.
To avoid these initial (alarming) estimates, there are techniques one could try like suppressing display of the first n estimates, or waiting for the delta between estimates to drop below some threshold before you start displaying them, but these don't seem like a general, robust solution. There are corner cases involving too few samples, or samples that actually are wildly varying...
I think I recall a general solution for this kind of thing in mathematics (statistics?) that reduced or eliminated these wild values.
Does anyone know?
Edit:
OK, looks like this has already been asked and answered:
Estimating/forecasting download completion time
My question even starts out with the same wording as this one. Funny...
Algo for a stable ‘download-time-remaining’ in a download window

Use a filter; a moving average can be good enough for calculating the speed.
S_filtered = S_filtered_previous * (1 - x) + S_current * x
where x is the inverse of the number of samples being averaged; try values from 0.1 down to 0.01 (roughly 10 to 100 samples).
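A minimal Python sketch of that moving-average filter (the class name, the 0.05 default for x, and the once-per-tick sampling are illustrative assumptions):

    # Exponential moving average of the download speed.
    # x is the smoothing factor described above: roughly the inverse of the
    # number of samples averaged (0.1 ~ 10 samples, 0.01 ~ 100 samples).
    class SpeedFilter:
        def __init__(self, x=0.05):
            self.x = x
            self.filtered = None

        def update(self, current_speed):
            if self.filtered is None:
                self.filtered = current_speed  # seed with the first sample
            else:
                self.filtered = self.filtered * (1 - self.x) + current_speed * self.x
            return self.filtered

Feed it one speed sample per tick (e.g. bytes transferred in the last second) and display remaining_bytes / filtered_speed.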

If you have the size of the file, how much of it has been downloaded, and an expected download speed (from previous files, from previous samples, from a dropdown the user picks from, or from a speed test), you could provide improved estimates.
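A hedged sketch of combining those inputs: seed the estimate with an expected speed (history, user choice, or speed test) and blend in the measured speed as samples accumulate. The function name, the linear blend and the warm-up length are assumptions for illustration:

    def eta_seconds(total_bytes, downloaded_bytes, measured_speed,
                    expected_speed, samples_seen, warmup=20):
        # Trust the prior (expected_speed) early on, the measurement later.
        w = min(samples_seen / float(warmup), 1.0)
        speed = (1 - w) * expected_speed + w * measured_speed
        if speed <= 0:
            return None  # stalled or unknown
        return (total_bytes - downloaded_bytes) / speed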

Related

algorithm to calculate ETA of a file downloading session [duplicate]

We've all poked fun at the 'X minutes remaining' dialog which seems to be too simplistic, but how can we improve it?
Effectively, the input is the set of download speeds up to the current time, and we need to use this to estimate the completion time, perhaps with an indication of certainty, like '20-25 mins remaining' using some Y% confidence interval.
Code that did this could be put in a little library and used in projects all over, so is it really that difficult? How would you do it? What weighting would you give to previous download speeds?
Or is there some open source code already out there?
Edit: Summarising:
Improve estimated completion time via better algo/filter etc.
Provide interval instead of single time ('1h45-2h30 mins'), or just limit the precision ('about 2 hours').
Indicate when progress has stalled - although if progress consistently stalls and then continues, we should be able to deal with that. Perhaps 'about 2 hours, currently stalled'
More generally, I think you are looking for a way to give an instant measure of the transfer speed, which is generally obtained by averaging over a small period.
The problem is that in order to be reactive, the period is usually extremely small, which leads to the yo-yo effect.
I would propose a very simple scheme; let's model it.
Think of speed as a curve f(x): y is the speed, x is the time.
The instant speed is just the value of y at the current time x0.
The average speed is Integral(f(x), x in [x0-T, x0]) / T.
The scheme I propose is to apply a weighting function that gives more weight to the most recent moments while still taking the past into account.
It can easily be implemented as g(x, x0, T) = 2 * (x - x0 + T) / T, a simple triangle whose area over [x0-T, x0] is T.
Now you can compute Integral(f(x) * g(x, x0, T), x in [x0-T, x0]) / T, which works because both functions are always positive.
Of course you could use a different g, as long as it is always positive on the interval and its integral over the interval is T (so that its own average is exactly 1).
The advantage of this method is that because you give more weight to immediate events, you can remain pretty reactive even if you consider larger time intervals (so that the average is more precise, and less susceptible to hiccups).
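A rough discrete version of that triangular weighting, assuming you keep a list of equally spaced speed samples covering the window (the function name and the linear-ramp discretisation are my assumptions):

    def weighted_speed(samples):
        # samples: speed readings covering [x0 - T, x0], oldest first.
        # Weight each sample by a linear ramp that is smallest at the oldest
        # end and largest at the newest end, the discrete analogue of g above.
        n = len(samples)
        if n == 0:
            return 0.0
        weights = list(range(1, n + 1))  # 1, 2, ..., n
        return sum(w * s for w, s in zip(weights, samples)) / float(sum(weights))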
Also, something I have rarely seen, but which I think would provide more precise estimates, would be to correlate the window used for computing the average with the estimated remaining time:
if I download a 5 KB file, it's going to be loaded in an instant; no need to estimate
if I download a 15 MB file, it's going to take roughly 2 minutes, so I would like estimates, say... every 5 seconds?
if I download a 1.5 GB file, it's going to take... well, around 200 minutes at the same speed, which is to say 3h20m... perhaps an estimate every minute would be sufficient?
So, the longer the download is going to take, the less reactive I need to be, and the more I can average out. In general, I would say that a window could cover 2% of the total time (perhaps except for the first few estimates, because people appreciate immediate feedback). Also, indicating progress one whole % at a time is sufficient. If the task is long, I'm prepared to wait anyway.
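A tiny sketch of that rule of thumb, tying the averaging window to the current estimate of the total transfer time; the clamping bounds are assumptions, not values from the answer:

    def averaging_window_seconds(estimated_total_seconds,
                                 minimum=2.0, maximum=120.0):
        # Roughly 2% of the estimated total time, but never shorter than
        # `minimum` (so early feedback stays quick) nor longer than `maximum`.
        return max(minimum, min(0.02 * estimated_total_seconds, maximum))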
I wonder, would a state estimation technique produce good results here? Something like a Kalman Filter?
Basically you predict the future by looking at your current model, and change the model at each time step to reflect the changes in the real world. I think this kind of technique is used for estimating the time left on your laptop battery, which can also vary according to use, age of the battery, etc.
See http://en.wikipedia.org/wiki/Kalman_filter for a more in-depth description of the algorithm.
The filter also gives a variance measure, which could be used to indicate your confidence in the estimate (although, as was mentioned by other answers, it might not be the best idea to show this to the end user).
Does anyone know if this is actually used somewhere for download (or file copy) estimation?
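I don't know of a download dialog that confirms using one, but a one-dimensional Kalman filter over the speed is easy to sketch; the process and measurement noise values below are made-up tuning constants:

    class SpeedKalman:
        # State: a single scalar, the current transfer speed.
        def __init__(self, q=1e3, r=1e5):
            self.q = q         # process noise: how fast we think the speed drifts
            self.r = r         # measurement noise: how jittery the samples are
            self.speed = None  # state estimate
            self.p = None      # state variance

        def update(self, measured_speed):
            if self.speed is None:
                self.speed, self.p = measured_speed, self.r
                return self.speed
            # Predict: speed assumed constant, uncertainty grows by q.
            self.p += self.q
            # Update: blend prediction and measurement via the Kalman gain.
            k = self.p / (self.p + self.r)
            self.speed += k * (measured_speed - self.speed)
            self.p *= (1 - k)
            return self.speed

Here self.p is the variance mentioned above, if you ever did want to derive a confidence interval from it.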
Don't confuse your users by providing more information than they need. I'm thinking of the confidence interval. Skip it.
Internet download times are highly variable. The microwave interferes with WiFi. Usage varies by time of day, day of week, holidays, and releases of new exciting games. The server may be heavily loaded right now. If you carry your laptop to cafe, the results will be different than at home. So, you probably can't rely on historical data to predict the future of download speeds.
If you can't accurately estimate the time remaining, then don't lie to your user by offering such an estimate.
If you know how much data must be downloaded, you can provide % completed progress.
If you don't know at all, provide a "heartbeat" - a piece of moving UI that shows the user that things are working, even though you don't know how much time remains.
Improving the estimated time itself: Intuitively, I would guess that the speed of the net connection is a series of random values around some temporary mean speed - things tick along at one speed, then suddenly slow or speed up.
One option, then, could be to weight the previous set of speeds by some exponential, so that the most recent values get the strongest weighting. That way, as the previous mean speed moves further into the past, its effect on the current mean reduces.
However, if the speed randomly fluctuates, it might be worth flattening the top of the exponential (e.g. by using a Gaussian filter), to avoid too much fluctuation.
So in sum, I'm thinking of measuring the standard deviation (perhaps limited to the last N minutes) and using that to generate a Gaussian filter which is applied to the inputs, and then limiting the quoted precision using the standard deviation.
How, though, would you limit the standard deviation calculation to the last N minutes? How do you know how long to use?
Alternatively, there are pattern recognition possibilities to detect if we've hit a stable speed.
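One way to keep the standard deviation restricted to the last N minutes is a sliding window of timestamped samples; this sketch (the names and the 5-minute default are assumptions) simply drops anything older than the window:

    from collections import deque
    import math
    import time

    class RollingStats:
        def __init__(self, window_seconds=300):
            self.window = window_seconds
            self.samples = deque()  # (timestamp, speed) pairs

        def add(self, speed, now=None):
            now = time.time() if now is None else now
            self.samples.append((now, speed))
            # Discard samples older than the window (the "last N minutes").
            while self.samples and now - self.samples[0][0] > self.window:
                self.samples.popleft()

        def stddev(self):
            values = [s for _, s in self.samples]
            if len(values) < 2:
                return 0.0
            mean = sum(values) / len(values)
            var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
            return math.sqrt(var)

The quoted precision could then be coarsened as the relative deviation grows, e.g. rounding the ETA to minutes, then to 5 minutes, then to hours.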
I've considered this off and on, myself. I think the answer starts with being conservative when computing the current (and thus, future) transfer rate, and includes averaging over longer periods to get more stable estimates. Perhaps also low-pass filtering the time that is displayed, so that one doesn't get jumps between 2 minutes and 2 days.
I don't think a confidence interval is going to be helpful. Most people wouldn't be able to interpret it, and it would just be displaying more stuff that is a guess.

Estimating/forecasting download completion time


Is the expected time shown during file copy the best time or worst time?

When we copy files in Windows, we get an expected time of completion. Is that time a best case or a worst case? Also, does it make assumptions about environmental variables?
Raymond Chen had something to say about this...
If you implement such a feature, let the progress bar grow quickly to 90%. Then you can perform the real job, no matter how long it takes. The user experience will be much better than showing the current progress ;-)
I can only guess how the time is calculated. But after many hours spent watching the copying window and seeing how the time estimate changes, here is my best guess:
Windows is keeping a list of all the files to be copied
It is keeping track of the time and number of files already copied
The remaining time is calculated as:
Average time per file so far = time passed so far / files already copied
Estimated time needed for all files = avg time per file * number of files.
The calculation is repeated after a fixed time span has passed (maybe 5 seconds, maybe 30?)
It is probably a little more complex than I explained above; I suppose that the size of the file currently being copied, and the percentage of it already copied, go into the calculation as well (as sketched below). That would explain why we see an estimate when only one file is being copied ;-).
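Putting that guess into code form (this is speculation about what Windows might do, not documented behaviour; the treatment of the in-progress file follows the paragraph above):

    def estimated_time_remaining(elapsed, files_copied, total_files,
                                 current_file_fraction=0.0):
        # Count the partially copied file as a fraction of a file.
        done = files_copied + current_file_fraction
        if done <= 0:
            return None  # nothing copied yet, no basis for an estimate
        avg_time_per_file = elapsed / done
        return avg_time_per_file * (total_files - done)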
So, in a direct answer to your question: it is neither the best nor the worst time, it is just a very rough estimate that gets less accurate the more the file sizes differ from each other.
Or in other words: it was probably the fastest way (in terms of quick programming, as well as low CPU usage when running) that a programmer could think of to implement the specified feature. I wouldn't be surprised if it was coded on a Friday afternoon...

What is the most effective way to present and communicate a performance improvement (e.g. percentages, raw data, graphics)? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Is it better to describe improvements using percentages or just the differences in the numbers? For example if you improved the performance of a critical ETL SQL Query from 4000 msecs to 312 msecs how would you present it as an 'Accomplishment' on a performance review?
In currency. Money is the most effective medium for communicating value, which is what you're trying to use the performance review to demonstrate.
Person hours saved, (very roughly) estimated value of $NEW_THING_THE_COMPANY_CAN_DO_AS_RESULT, future hardware upgrades averted, etc.
You get the nice bonus that you show that you're sensitive to the company's financial position; a geek who can align himself with what the company is really about.
Take potato
Drench Potato in Lighter Fluid
Light potato on fire
Hand potato to boss
Make boss hold it for 4 seconds.
Ask boss how long those 4 seconds felt
Ask boss how much better half a second would have been
Bask in glory
It is always better to measure relative improvement.
So, if you brought it down to 312 ms from 4000 ms, that is an improvement of 3688 ms, which is 92.2% of the original runtime. So you reduced the runtime by 92.2%; in other words, you brought the runtime down to only 7.8% of what it was originally.
Absolute numbers, on the other hand, usually are not that good since they are not comparable. (If your original runtime was 4,000,000ms then an improvement of 3688ms isn't that great.)
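A quick check of the 4000 ms to 312 ms example, showing the two common framings side by side:

    old, new = 4000.0, 312.0
    reduction = (old - new) / old * 100   # 92.2% less runtime
    speedup = old / new                   # ~12.8x faster
    print(f"runtime reduced by {reduction:.1f}%, i.e. {speedup:.1f}x faster")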
See this link for some nice chart suggestions.
Comparison to Requirements
If I have requirements (response time, throughput), I like to color code the absolute numbers like so:
Green: <= 80% of the requirement (response time); >= 120% of the requirement (throughput)
No formatting: Meets the requirement.
Red: Does not meet the requirement.
Comparisons are interesting, but only if we have enough to see trends over time; Is our performance steadily improving or degrading? Ultimately, the business only cares if we're meeting the requirement. It's only when we don't that they ask for comparisons to previous releases.
Comparison of Benchmarks
If I'm comparing benchmarks to some baseline, then I like to use percentages, but only if the benchmark is a statistically significant change from the baseline.
Hardware Sizing
If I'm doing hardware sizing or capacity planning, then I like to express the performance as the absolute number plus the cost per transaction. For example:
System A: 1,000 transactions/second, $0.02/transaction
System B: 1,500 transactions/second, $0.04/transaction
Use whichever appears most impressive given the change. According to one method of calculation, that change sped up the query by 1,300%, which looks more impressive than 13x improvement, or
============= <-- old query
= <-- new query
Although the graph isn't a bad method.
If you can calculate the improvement in money, then go for that. One piece of software I wrote many years ago saved a few engineers a little bit of time each day. Figuring out the cost of salary, benefits, overhead and it turned into a savings of more than $12k per year for a small company.
-Adam
Rule of thumb: whichever sounds more impressive.
If you went from 10 tasks done in a period to 12, you could say you improved the performance by 20%
Saying you did two more tasks doesn't seem that impressive.
In your case, both numbers sound good, but try different representations and see what you get!
Sometimes graphics help a lot when the improvement is spread over a number of factors, but the combined number somehow does not look that impressive.
Example: You have 5 params A, B, C, D, E. You could make a bar chart with those 5 params and "before and after" values side by side for each param. That sure will look impressive.
God, I'm starting to sound like my friend from marketing!
runs away screaming
You can make numbers and graphs say anything you want; the important thing is to make them say something meaningful and relevant to the audience you're presenting them to. If it's end users, you can show them the difference in screen refreshes (something they understand); for managers, perhaps the reduced number of servers they'll need in order to support the application ($ savings); for finance, it's all about the $: how much did it save them? A general rule: the less technical the group, the more graphical and dramatic you need to be.

Does evidence based scheduling work right with heterogenous estimations? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Observing one year of estimations during a project, I found some strange things that make me wonder whether evidence based scheduling would work right here:
individual programmers seem to have favorite numbers (e.g. 2, 4, 8, 16, 30 hours)
the big tasks seem to be underestimated by a fixed factor (about 2), but the standard deviation is low here
the small tasks (1 or 2 hours) are very widely distributed. On average they have the same underestimation factor of 2, but the standard deviation is high:
some 5-minute spelling issues are estimated at 1 hour
other bugfixes are estimated at 1 hour too, but take a day
So, is it really a good idea to have the programmers break the 30-hour tasks down into 2- or 4-hour steps during estimation? Won't this raise the standard deviation? (OK, let them break tasks down - but perhaps after the estimation?!)
Yes, your observations are exactly the sort of problems EBS is designed to solve.
Yes, it's important to break bigger tasks down. Shoot for 1-2 day tasks, more or less.
If you have things estimated at under 2 hrs, see if it makes sense to group them. (It might not -- that's ok!)
If you have tasks that are estimated at 3+ days, see if there might be a way to break them up into pieces. There should be. If the estimator says there is not, make them defend that assertion. If it turns out that the task really just takes 3 days, fine, but the more of these you have, the more you should be looking hard in the mirror and seeing if folks aren't gaming the system.
Count 4 & 5 day estimates as 2x and 4x as bad as 3 day ones. Anyone who says something is going to take longer than 5 days and it can't be broken down, tell them you want them to spend 4 hrs thinking about the problem, and how it can be broken down. Remember, that's a task, btw.
As you and your team practice this, you will get better at estimating.
...You will also start to recognize patterns of failure, and solutions will present themselves.
The point of Evidence based scheduling is to use Evidence as the basis for your schedule, not a collection of wild-assed guesses. It's A Good Thing...!
I think it is a good idea. When people break tasks down, they figure out the specifics of the task. You may get small deviations here and there, one way or the other, and they may compensate or not... but you get a feeling for what is happening.
If you have a huge task of 30 hours, it can end up taking 100. This is the worst that could happen.
Manage the risk: split tasks down. You have already figured out these small deviations; you know what to do with them.
So make sure developers also know what they do and say :)
"So, is it really a good idea to let the programmers break down the 30 hours task down to 4 or 2 hours steps during estimations? Won't this raise the standard deviation? (Ok, let them break it down - but perhaps after the estimations?!)"
I certainly don't get this question at all.
What it sounds like you're saying (you may not be saying this, but it sure sounds like it)
The programmers can't estimate at all -- the numbers are always rounded to "magic" values and off by 2x.
I can't trust them to both define the work and estimate the time it takes to do the work.
Only I know the correct estimate for the time required to do the task. It's not a round 1/2 day multiple. It's an exact number of minutes.
Here's my follow-up questions:
What are you saying? What can't you do? What problem are you having? Why do you think the programmers estimate badly? Why can't they be trusted to estimate?
From your statements, nothing is broken. You're able to plan and execute to that plan. I'd say you were totally successful and doing a great job at it.
OK, I have the answer. Yes, it works, AND the observations I made (see the question) are entirely understandable. To be sure, I made a small Excel simulation to confirm what I was guessing.
If you add up multiple small tasks with a high standard deviation into bigger tasks, the totals will have a lower relative deviation, because the small tasks' errors partially compensate for each other.
So the answer is: yes, it will work if you break your tasks down so that they are about the same length. The simulation does the compensation for bigger tasks automatically, so I do not need to worry about a higher standard deviation in the smaller tasks.
But I am sure you must not mix low-estimate tasks with high-estimate tasks, because they simply do not have the same variance.
Hence, it's always better to break them down. :)
The Excel simulation I made:
create 50 rows with these columns:
first, a fixed value of 2 (the very homogeneous estimate)
20 columns with some random function (e.g. "=rand()*rand()*20")
make sums for each column
add "=VARIANCE(..)" for each random column
and add a variance calculation for the sums
The variance for each column in my simulation was about 2-3 and the variance of the sums below 1.
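A rough Python equivalent of that spreadsheet experiment (the random distribution is an arbitrary stand-in for estimation error, so the absolute numbers won't match the Excel ones); it compares the relative spread of individual tasks with the relative spread of the column totals:

    import random
    import statistics

    ROWS, COLS = 50, 20

    # Each column is one batch of 50 noisy task durations, like the
    # =RAND()*RAND()*20 cells in the spreadsheet.
    columns = [[random.random() * random.random() * 20 for _ in range(ROWS)]
               for _ in range(COLS)]

    cells = [c for col in columns for c in col]
    task_cv = statistics.stdev(cells) / statistics.mean(cells)

    totals = [sum(col) for col in columns]
    total_cv = statistics.stdev(totals) / statistics.mean(totals)

    print(f"relative spread per task:  {task_cv:.2f}")
    print(f"relative spread per total: {total_cv:.2f}")
    # The totals vary far less, relatively, than the individual tasks:
    # the per-task errors partially cancel when summed.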

Resources