I have a sequence of situations, let's say 5 of them.
The point is that I don't know how long each of them will last; I only know when each one has finished.
So imagine this method is called at unpredictable times. It isn't truly random, I just can't know in advance when it will be called:
public void refresh(int i){
}
Now I can update the progress bar by increasing it each time refresh() is called, but the user experience will be: the bar jumps by 20%, then nothing moves for a while, then another 20% jump, then it's stuck again, and so on.
What I want to achieve is something smoother.
I know this isn't an easy task, but I'm sure someone has met this problem before, so what are the workarounds?
If the updates are going to arrive essentially at random, you might consider using a "spinner" instead of a progress bar. The end result is almost the same: the user has some idea that the process is underway. If you want to give a little more information, you can update a text label next to the spinner to indicate which part of the sequence is underway.
I've encountered this before. What I did was estimate how much of the 100% progress each of my "situations" would need. So task 1 might go up to 15%, task 2 up to 50%, and so on; keep in mind this was just a rough estimate. Then, within each situation, I tracked that task's own progress toward 100% completeness and incremented it accordingly. Finally, I did the math to convert that into overall progress: if I was in task 2, I scaled the task's own completion percentage against the 35% of the total bar (15% to 50%) that I had estimated task 2 to take. I hope this helps you.
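A minimal sketch of that arithmetic in Python (the shares and names are illustrative, not from the answer):

```python
# Each task gets an estimated share of the overall bar; progress within the
# current task is scaled into that share. Shares here are made-up examples.
TASK_SHARES = [0.15, 0.35, 0.20, 0.20, 0.10]  # must sum to 1.0

def overall_progress(task_index, task_fraction):
    """Overall 0.0-1.0 progress: the shares of all completed tasks, plus the
    current task's share scaled by its own completion fraction."""
    done = sum(TASK_SHARES[:task_index])
    return done + TASK_SHARES[task_index] * task_fraction
```

For example, halfway through task 2 (index 1), the bar shows 0.15 + 0.35 * 0.5 = 32.5%.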
Quick question: is it really a progress bar you want?
From what I read, you have no time estimate; you only know that each chunk is 20% of the whole, not how long each chunk takes to run, but you want the bar to move more smoothly than in large 20% jumps.
As you only have 5 measurement points, unless you can add another layer of progress reporting within each task, you are stuck guessing.
If you want to guess, you could make a rough estimate of the time a task will take. Depending on the task at hand, hard-coding an expected duration may give a good or a bad estimate. The trick you're asking for is to look like progress, i.e. to hide the fact that you're at an uncertain point within operation 1 of 5.
Now have the counter move up slowly toward the next mark over the expected time (say 1 minute), but shrink the increments as you approach that time, so the bar appears to move faster or slower as you near the expected next point. You will have visible progress, but it is pure guesswork. If the task ends early, you have some "missing" percentage to make the next progress gap larger; if it runs slow, the bar slows down more and more.
This is a lot of extra work to show something that is misleading at best. To reassure users that progress hasn't stalled, you might want to look at other options alongside the progress bar.
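A minimal sketch of the creep-toward-the-next-mark idea above (class and parameter names are illustrative, not from the answer):

```python
class SmoothedProgress:
    """Between real checkpoints the bar creeps toward the next 20% mark,
    slowing as it approaches so it never overshoots; each real refresh()
    snaps to the mark and aims at the next one."""

    def __init__(self, steps=5, creep=0.05):
        self.step_size = 1.0 / steps
        self.creep = creep          # fraction of the remaining gap covered per tick
        self.target = self.step_size
        self.value = 0.0

    def tick(self):
        # Move a constant fraction of the remaining gap: fast at first,
        # slower and slower near the target, never actually reaching it.
        self.value += (self.target - self.value) * self.creep
        return self.value

    def refresh(self):
        # A real checkpoint arrived: snap to the mark, aim at the next one.
        self.value = self.target
        self.target = min(1.0, self.target + self.step_size)
        return self.value
```

You would call tick() from a UI timer and refresh() when a real checkpoint completes; because each tick covers only a fixed fraction of the remaining gap, the bar decelerates rather than stalling outright.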
In a GUI application I would like to show a progress dialog displaying how much time is left before the task completes. How can I get the remaining time before the task ends and count it down? Thanks.
How to get the remaining time is something no one but your application (or you) can know.
Assuming you have the code for this GUI application: to determine the remaining time, you simply need to know the total time the task takes and subtract the amount of time that has passed since the start of the task.
We've all poked fun at the 'X minutes remaining' dialog which seems to be too simplistic, but how can we improve it?
Effectively, the input is the set of download speeds up to the current time, and we need to use this to estimate the completion time, perhaps with an indication of certainty, like '20-25 mins remaining' using some Y% confidence interval.
Code that did this could be put in a little library and used in projects all over, so is it really that difficult? How would you do it? What weighting would you give to previous download speeds?
Or is there some open source code already out there?
Edit: Summarising:
Improve estimated completion time via better algo/filter etc.
Provide an interval instead of a single time ('1h45-2h30'), or just limit the precision ('about 2 hours').
Indicate when progress has stalled - although if progress consistently stalls and then continues, we should be able to deal with that. Perhaps 'about 2 hours, currently stalled'
More generally, I think you are looking for a way to give an instant measure of the transfer speed, which is generally obtained by averaging over a small period.
The problem is that, in order to be reactive, the period is usually kept extremely small, which leads to the yo-yo effect.
I would propose a very simple scheme, let's model it.
Think of a curve of speed (y) over time (x).
The instant speed is just reading y at the current x (call it x0).
The average speed is just Integral(f(x), x in [x0-T, x0]) / T.
the scheme I propose is to apply a filter, to give more weight to the last moments, while still taking into account the past moments.
It can easily be implemented as g(x, x0, T) = 2 * (x - x0) / T + 2, a simple triangle whose area over [x0-T, x0] is T.
And now you can compute Integral(f(x)*g(x,x0,T), x in [x0-T,x0]) / T, which should work because both functions are always positive.
Of course you could use a different g, as long as it is always positive on the given interval and its integral over the interval is T (so that its own average is exactly 1).
The advantage of this method is that because you give more weight to immediate events, you can remain pretty reactive even if you consider larger time intervals (so that the average is more precise, and less susceptible to hiccups).
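A discrete sketch of this filter over sampled speeds, assuming the samples are ordered oldest first (names are illustrative):

```python
def weighted_speed(samples):
    """Discrete analogue of the triangular filter described above: the i-th
    sample (oldest first) gets weight proportional to its recency, so recent
    samples dominate while older ones still count."""
    n = len(samples)
    weights = range(1, n + 1)          # oldest -> 1, newest -> n
    total = sum(w * s for w, s in zip(weights, samples))
    return total / sum(weights)
```

With samples [0, 0, 10] this yields (1*0 + 2*0 + 3*10) / 6 = 5.0, reacting faster to the recent jump than a plain mean (3.33) while still damping it.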
Also, what I have rarely seen but think would provide more precise estimates would be to correlate the time used for computing the average to the estimated remaining time:
if I download a 5 kB file, it will be loaded in an instant; no need to estimate
if I download a 15 MB file, it will take roughly 2 minutes, so I would like estimates, say... every 5 seconds?
if I download a 1.5 GB file, it will take... well, around 200 minutes (at the same speed), which is to say 3h20m; perhaps an estimate every minute would be sufficient?
So, the longer the download is going to take, the less reactive I need to be, and the more I can average out. In general, I would say that a window could cover 2% of the total time (perhaps except for the few first estimates, because people appreciate immediate feedback). Also, indicating progress by whole % at a time is sufficient. If the task is long, I was prepared to wait anyway.
I wonder, would a state estimation technique produce good results here? Something like a Kalman Filter?
Basically, you predict the future by looking at your current model, and you change the model at each time step to reflect changes in the real world. I think this kind of technique is used for estimating the time left on your laptop battery, which also varies according to use, the age of the battery, etc.
see http://en.wikipedia.org/wiki/Kalman_filter for a more in depth description of the algorithm.
The filter also gives a variance measure, which could be used to indicate your confidence in the estimate (although, as other answers mention, it might not be the best idea to show this to the end user).
Does anyone know if this is actually used somewhere for download (or file copy) estimation?
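As an illustration only (not a known implementation), a minimal scalar Kalman filter over speed samples might look like this; the noise parameters q and r are made-up values you would tune:

```python
class SpeedKalman:
    """Minimal scalar Kalman filter for a transfer-speed estimate.
    Model assumption: the true speed is a random walk (process noise q),
    and each measured sample adds measurement noise r."""

    def __init__(self, initial_speed, q=0.5, r=4.0):
        self.x = initial_speed   # current speed estimate
        self.p = 1.0             # estimate variance (our uncertainty)
        self.q = q               # process noise: how fast the true speed drifts
        self.r = r               # measurement noise: how jumpy samples are

    def update(self, measured):
        self.p += self.q                    # predict: uncertainty grows over time
        k = self.p / (self.p + self.r)      # Kalman gain: trust in the new sample
        self.x += k * (measured - self.x)   # correct the estimate toward the sample
        self.p *= (1.0 - k)                 # uncertainty shrinks after the update
        return self.x
```

The variance self.p is the by-product mentioned above: it could drive an interval display ("20-25 mins remaining") if you chose to show one.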
Don't confuse your users by providing more information than they need. I'm thinking of the confidence interval. Skip it.
Internet download times are highly variable. The microwave interferes with WiFi. Usage varies by time of day, day of week, holidays, and releases of exciting new games. The server may be heavily loaded right now. If you carry your laptop to a cafe, the results will be different than at home. So you probably can't rely on historical data to predict the future of download speeds.
If you can't accurately estimate the time remaining, then don't lie to your user by offering such an estimate.
If you know how much data must be downloaded, you can provide % completed progress.
If you don't know at all, provide a "heartbeat" - a piece of moving UI that shows the user that things are working, even through you don't know how long remains.
Improving the estimated time itself: Intuitively, I would guess that the speed of the net connection is a series of random values around some temporary mean speed - things tick along at one speed, then suddenly slow or speed up.
One option, then, could be to weight the previous set of speeds by some exponential, so that the most recent values get the strongest weighting. That way, as the previous mean speed moves further into the past, its effect on the current mean reduces.
However, if the speed randomly fluctuates, it might be worth flattening the top of the exponential (e.g. by using a Gaussian filter), to avoid too much fluctuation.
So in sum, I'm thinking of measuring the standard deviation (perhaps limited to the last N minutes) and using that to generate a Gaussian filter which is applied to the inputs, and then limiting the quoted precision using the standard deviation.
How, though, would you limit the standard deviation calculation to the last N minutes? How do you know how long to use?
Alternatively, there are pattern recognition possibilities to detect if we've hit a stable speed.
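A minimal sketch of the exponential-weighting idea from this answer (the smoothing factor alpha is an illustrative choice, not prescribed by the post):

```python
def ewma(samples, alpha=0.3):
    """Exponentially weighted moving average of speed samples (oldest first):
    each new sample pulls the average by a fraction alpha, so older values
    decay geometrically, as the answer describes."""
    avg = samples[0]
    for s in samples[1:]:
        avg = alpha * s + (1.0 - alpha) * avg
    return avg
```

A larger alpha makes the estimate more reactive (and more yo-yo-prone); a smaller alpha smooths harder, which is the trade-off the thread keeps circling around.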
I've considered this off and on myself. I think the answer starts with being conservative when computing the current (and thus future) transfer rate, and includes averaging over longer periods to get more stable estimates. Perhaps low-pass filter the time that is displayed, so that it doesn't jump between 2 minutes and 2 days.
I don't think a confidence interval is going to be helpful. Most people wouldn't be able to interpret it, and it would just be displaying more stuff that is a guess.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Background
My team is currently in the "bug fixing and polishing" phase of shipping a major rewrite. We still have a large stack of bugs to fix, scheduled against a couple of milestones, and we've been asked to estimate how much engineering effort is required to fix the bugs for each milestone.
For previous milestones, we've followed the following process:
Assign the bugs to the people that know the most about that area of the code, and will likely be the one to fix the bug.
Have each person go through the bugs that are assigned to them, and estimate how long they think it will take to fix the bugs, at an hour-level granularity. If a bug looks like it will potentially take more than a day or two to fix, they break the bug into likely subtasks, and estimate those.
Total the amount of work assigned to each person for each milestone, and try to balance things out if people have drastically different amounts of work.
Multiply each person's total for each milestone by a "padding factor", to account for overly optimistic estimates (we've been using 1.5).
Take the largest total across the team members for a given release, and make that the time it will take for the team to close the existing bugs.
Estimate the number of bugs we expect to be created during the time it takes us to reach a particular milestone, and estimate how long on average, we think it will take to close each of these bugs. Add this on to the time to close the existing bugs for each release. This is our final number of the amount of work needed, delivered as a date by which we'll definitely ship that milestone.
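The totalling arithmetic of the process above can be sketched like this (the data and names are illustrative):

```python
PADDING = 1.5  # the post's factor for overly optimistic estimates

def milestone_estimate(hours_per_person, expected_new_bugs, avg_hours_per_new_bug):
    """hours_per_person maps each engineer to the summed estimates of their
    assigned bugs. The milestone is gated by the most-loaded person (padded),
    plus the expected cost of bugs filed while the work is underway."""
    critical_path = max(hours_per_person.values()) * PADDING
    return critical_path + expected_new_bugs * avg_hours_per_new_bug
```

For example, with one engineer at 40 hours and another at 60, plus 10 expected new bugs at 2 hours each, the milestone costs 60 * 1.5 + 20 = 110 hours.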
This has been fairly accurate (we came in pretty much spot-on for our previous three milestones), but it's rather time-consuming.
Current Problem
We've been asked to give estimates of the engineering time for upcoming milestones, but asked not to use the above process because it's too time consuming. Instead, as the tech lead of the team, I've been asked to provide estimates that are less certain, along with a certainty interval (ie, 1 month, plus or minus a week).
My primary estimation experience is with some variation of the method I described above (from a background of freelancing for a number of years). I've found that when I "shoot from the hip" on large tasks, I tend to be way off. I suspect it will be even worse when estimating how long it takes to fix bugs in areas of the code I don't know very well.
What tips, tricks or techniques have you found successful for estimating quickly, without breaking things down into fine grained tasks and estimating them?
Things that are not an option:
Not giving an estimate - I've tried this, it didn't fly:)
Picking a number and confidence interval that is ridiculously wide - I've considered this, but I don't think it'll fly either.
Evidence-based scheduling - We're using JIRA, which doesn't have any evidence-based scheduling tools written for it, and we can't migrate to FogBugz currently (BTW, if someone goes and writes an evidence-based scheduling plugin for JIRA, we would gladly pay for it).
The best tip for estimating: round up a heck of a lot.
It sounds like you're already an expert on the topic of estimation, and that you know the limitations of what's possible. It just isn't possible to estimate a task without assessing what needs doing to complete it!
The amount of time spent assessing is directly proportional to the accuracy of the estimate, and the two converge at the point where the assessment is so thorough that you've actually solved the task; at that moment, you know exactly how long it takes.
Hmm, sorry, this may not be the answer you wanted to hear... it's just my thoughts on it though.
Be prepared to create a release at any time
Have the stake-holders prioritise the work done
Work on the highest priority items
Step 1. means you never miss a deadline.
Step 2. is a way of turning the question back on those who are asking you to estimate without spending time estimating.
Edit...
The above doesn't really answer your question, sorry.
The stakeholders will want to prioritise work based on how long and how expensive each task will be, and you are likely to be asked which of the highest-priority changes you expect to complete by the next deadline.
My technique that takes the least time is to use three times my impression of how long I think it would take me to do it.
You're looking for something that takes longer than that, but not as long as your previous, excellent estimates.
You'll still need to look at each bug, even if only to take a guess at whether it is easy, average, or tricky, or 1, 2, 4, 8, 16 or 32 hours of work.
If you produce some code-complexity metrics over your code base (e.g. cyclomatic complexity), then for each task take a stab at which two or three portions of the code base will need to change the most, and estimate on the assumption that less complex code is quicker to change than more complex code. You could derive heuristics from a few of your previous estimates and apply them to each bug fix, giving an estimate of the time and the variability required.
How about:
estimate = (bugs / devs) x days-per-bug (x K)
As simple as this is, it's actually quite accurate, and it can be computed in one minute.
Its confidence level is lower than your detailed method's, but I'd recommend checking your data from the last three milestones: the difference between this quick estimate and your detailed estimate gives you a "K" value representing your team's constant.
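A sketch of this formula plus the K calibration it describes (names and numbers are illustrative):

```python
def quick_estimate(bugs, devs, days_per_bug, k=1.0):
    """The answer's back-of-envelope formula: (bugs / devs) x days-per-bug,
    scaled by a team constant K calibrated from past milestones."""
    return (bugs / devs) * days_per_bug * k

def calibrate_k(actual_days, bugs, devs, days_per_bug):
    """Derive K from a past milestone: actual duration over the raw formula."""
    return actual_days / ((bugs / devs) * days_per_bug)
```

For example, if a past milestone with 40 bugs and 4 devs at 2 days per bug actually took 30 days, K = 30 / 20 = 1.5, and future quick estimates get scaled by that.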
Be surprised.
Use Planning Poker, see the answers to How to estimate the length of a programming task
In simplest terms:
Your Most Absolutely Liberal Estimation * 3 = Your Estimate
The above may look like a joke, but it's not. I've used it many times. Time estimation on software projects of any kind is a game, just like making a deal with a car dealer. That formula will get you something to provide your management in a pinch and give you some room to play with as well.
However, if you're somehow able to get down to the more granular details (which is really the only way you'll be able to be more accurate), Google on Function Point Analysis, sometimes called "Fast Function Point Analysis" or "Fast Function Point Estimation".
Many folks out there have a myriad of spreadsheets, PDFs and the like that can help you estimate as quickly as possible. Check out the spreadsheets first, as they'll have formulas built in for you.
Hope this helps!
You've been asked how to produce an estimate and an uncertainty interval. A better way to think of this is to produce a worst-case estimate and a best-case estimate, and combine the two into an estimate range. Well-understood issues will naturally get narrower ranges than less-understood issues. For example, an estimate of 1.5-2 days is probably for a well-understood issue, while 2-14 days would be typical for an issue that's not understood at all.
Limit the amount of investigation and time spent producing an estimate by allowing a wider gap between the best and worst cases. This works because it's relatively easy to imagine realistic best-case and worst-case scenarios. When the uncertainty range is wider than you're comfortable dealing with in the schedule, take some time to understand the less-understood issues better; it may help to break them up.
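A naive sketch of combining per-issue ranges into a schedule range (summing all the worst cases is pessimistic, so treat the result as bounds rather than a forecast):

```python
def combine_ranges(ranges):
    """Combine per-issue (best, worst) day estimates by summing each end.
    Well-understood issues contribute narrow spans; murky ones widen the
    total, which is exactly the signal the answer suggests watching."""
    best = sum(lo for lo, hi in ranges)
    worst = sum(hi for lo, hi in ranges)
    return best, worst
```

With the answer's two example issues, (1.5, 2) and (2, 14) days, the combined range is 3.5-16 days, and the 12-day spread flags the second issue as the one worth investigating further.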
I usually go for half-day granularity rather than hour granularity in my estimates if the work is expected to take more than a week overall.
public static class Time
{
    /// <summary>
    /// Estimates the hours.
    /// </summary>
    /// <param name="NumberPulledFromAss">The number pulled from ass.</param>
    /// <param name="Priority">The priority.</param>
    /// <param name="Complexity">The complexity.</param>
    /// <returns>
    /// A number in hours to estimate the time to complete a task.
    /// Hey, you will be wrong anyway, why waste more time than you need?
    /// </returns>
    public static int EstimateHours(int NumberPulledFromAss, int Priority, int Complexity)
    {
        var rand = new Random(NumberPulledFromAss);
        var baseGuess = rand.Next(1, NumberPulledFromAss);
        return (baseGuess + (Priority * Complexity)) * 2;
    }
}
Your estimates are only as accurate as the time you put into them. That time can be spent breaking the problem down, or drawing on past experience in areas you're familiar with. If neither is an option, then try breaking the bugs/polish down into groups:
Trivial fix of a few hours.
Up to one day effort.
Very complex - one week effort.
Once you have these categorised, you can work out a rough guesstimate.
Many hints may be useful in this article on an agile blog: Agile Estimating.
Calculating the variability in your estimate will take longer than calculating your estimate.
I'm currently working on an application that allows people to schedule "Shows" for an online radio station.
I want the ability for the user to setup a repeated event, say for example:-
"Manic Monday" show - Every Monday from 9-11
"Mid Month Madness" - Every Second Thursday of the Month
"This months new music" - 1st of every month.
What, in your opinion, is the best way to model this (based around an MVC/MTV structure).
Note: I'm actually coding this in Django. But I'm more interested in the theory behind it, rather than specific implementation details.
Ah, repeated events - one of the banes of my life, along with time zones. Calendaring is hard.
You might want to model this in terms of RFC 2445. However, that may well give you far more flexibility (and complexity) than you really want.
A few things to consider:
Do you need any finer granularity than a certain time on given dates? If you need to repeat based on time as well, it becomes trickier.
Consider date corner cases such as "the 30th of every month" and what that means in February (and "the 29th" in non-leap years)
Consider time corner cases such as "1.30am every day" - sometimes 1.30am may happen twice, and sometimes it may not happen at all, due to daylight saving time
Do you need to share the schedule with people in other time zones? That makes life trickier again
Do you need to represent the number of times an event occurs, or a final date on which it occurs? ("Count" or "until" basically.) You may not need either, or you may need one or both.
I realise this is a list of things to think about more than a definitive answer, but I think it's important to define the parameters of your problem before you try to work out a solution.
From reading other posts, Martin Fowler describes recurring events the best.
http://martinfowler.com/apsupp/recurring.pdf
Someone implemented these classes for Java.
http://www.google.com/codesearch#vHK4YG0XgAs/src/java/org/chronicj/DateRange.java
I've had a thought that repeated events should be generated when the original event is saved, using a new model. That way I'm not doing the expensive processing every time the calendar is loaded (and it also means I can, for example, cancel one "Show" in a series), but it also means I have to limit generation to a certain time frame: if someone looked, say, a year into the future, they wouldn't see the repeated shows, and at some point they'd (potentially) have to be re-generated.
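One way to sketch the "materialise occurrences within a bounded window" idea, independent of Django (all names are illustrative; a real model would persist these rows):

```python
from datetime import date, timedelta

def weekly(weekday):
    """Rule: every week on the given weekday (0 = Monday)."""
    return lambda d: d.weekday() == weekday

def nth_weekday(n, weekday):
    """Rule: the n-th given weekday of each month (e.g. the second Thursday)."""
    return lambda d: d.weekday() == weekday and (d.day - 1) // 7 == n - 1

def day_of_month(day):
    """Rule: a fixed day number each month (e.g. the 1st)."""
    return lambda d: d.day == day

def occurrences(rule, start, end):
    """Materialise a rule's dates inside a bounded window, so far-future
    shows are only generated (or re-generated) on demand."""
    d = start
    while d <= end:
        if rule(d):
            yield d
        d += timedelta(days=1)
```

For example, `list(occurrences(nth_weekday(2, 3), date(2024, 1, 1), date(2024, 3, 31)))` yields the second Thursdays of January through March 2024, which matches the "Mid Month Madness" case; cancelling one show is then just deleting one generated row.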