I started a store listing experiment more than two months ago.
It already has more than 10K total installs. The results look pretty stable, but the experiment won't end.
Should I end the experiment manually? Should I wait until it completes? How long can it take?
In my experience, if the experiment hasn't finished within one month, you should end it manually and start a different experiment.
I had an experiment that ran for more than 3 years with no results.
Given training data of an organisation's meter readings, recorded at 15-minute intervals each day, for some N days.
Now, with the help of this data, we need to tell whether the organisation was closed or open on a particular day. I would like to know of any link that could help with this, if someone has worked in this field.
By closed I mean that on that day the electricity consumption will obviously be almost constant, though this is just a single feature to take into account.
So what is the best way to predict this?
A fixed threshold may not suit every kind of organisation, but there is certainly a minimum consumption.
You may also want to use the information from the past 15 minutes to increase reliability,
adding one more input to your classifier.
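As a hedged sketch of the threshold idea above (the function name, the two features, and the threshold values are all my own illustrative assumptions, to be calibrated on the labelled training days):

```python
# Day-level open/closed classifier sketch, assuming 96 readings per day
# (kWh per 15-minute interval). Two illustrative features: the daily
# spread (near zero on a closed day) and the daily total (baseline only).

def classify_day(readings, spread_threshold=0.5, total_threshold=12.0):
    """Return 'closed' if consumption is nearly constant and low, else 'open'.

    The thresholds are placeholders; calibrate them per organisation
    from the labelled training days.
    """
    spread = max(readings) - min(readings)
    total = sum(readings)
    if spread < spread_threshold and total < total_threshold:
        return "closed"
    return "open"

# Toy example: a flat baseline day vs. a working day.
print(classify_day([0.1] * 96))                            # -> closed
print(classify_day([0.1] * 28 + [2.0] * 40 + [0.1] * 28))  # -> open
```

A real classifier would add more features (weekday, season, the previous 15-minute reading mentioned above) and learn the decision boundary from the N labelled days instead of using hand-picked thresholds.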
In TFS, the Remaining Work field is a Double.
How can I set minutes for it?
For quarters of an hour this is "easy" (or at least less difficult):
1h30 = 1,5
1h45 = 1,75
But, for example:
10min = 0,17
20min = 0,33
It's hard!
Suggestions? Or am I overreacting?
Why would you need such a level of precision? I personally only deal in hours, although I know people who do deal in half hours.
I rarely have any task in development that will take less than 30 minutes, and if I did, I would just round it up. I would not expect anyone to have a lot of tasks taking under 30 minutes; if you do, perhaps your tasks are too granular. My tasks are often 2 to 3 hours in size, and I just change them to 0 when complete, or update remaining hours at the end of the day, or if I realise I have underestimated and need to add more on. I do not perform periodic updates through the day because they don't really benefit anyone.
Remaining work in TFS is expected to be used as part of the burn down chart to show the estimated number of hours remaining, so long as you are tracking on the line it doesn't really matter.
It all comes down to how you have to work: if you are free to adopt agile principles, then just stick with hours and half-hours; if you have management that requires to-the-minute remaining work, then you will have to go with the approach in your question.
I agree with Dave, no need to track at the minute granularity.
And if for some reason you really do need to track at that level, there is nothing saying that Remaining Work has to be entered in hours. It's just a numeric field, you could enter hours, minutes, days, whatever, so long as everybody consistently uses the same unit (I think there may be a couple reports that show it as hours, but as far as TFS is concerned it doesn't have to be hours if you don't want it to be).
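For illustration, the decimal values in the question are just hours plus minutes divided by 60; a throwaway helper (the name, dot decimals, and two-decimal rounding are my own choices) shows the conversion:

```python
# Convert hours and minutes to the decimal number that the numeric
# Remaining Work field stores (shown with a dot as decimal separator,
# where the question uses a comma).

def to_decimal_hours(hours, minutes):
    return round(hours + minutes / 60.0, 2)

print(to_decimal_hours(1, 30))  # -> 1.5
print(to_decimal_hours(1, 45))  # -> 1.75
print(to_decimal_hours(0, 10))  # -> 0.17
print(to_decimal_hours(0, 20))  # -> 0.33
```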
Possible Duplicate:
Estimating/forecasting download completion time
We've all seen the download time running estimate that initially says something like "7 days", but keeps dropping wildly (e.g. "23 hours", "45 minutes", "1 min. 50 sec", etc) with each successive estimation as the chunks are downloaded.
To avoid these initial (alarming) estimates, there are techniques one could try like suppressing display of the first n estimates, or waiting for the delta between estimates to drop below some threshold before you start displaying them, but these don't seem like a general, robust solution. There are corner cases involving too few samples, or samples that actually are wildly varying...
I think I recall a general solution for this kind of thing in mathematics (statistics?) that reduced or eliminated these wild values.
Does anyone know?
Edit:
OK, looks like this has already been asked and answered:
Estimating/forecasting download completion time
My question even starts out with the same wording as this one. Funny...
Algo for a stable ‘download-time-remaining’ in a download window
Use a filter; a moving average can be good enough for calculating speed.
S_filtered = S_filtered_previous * (1 - x) + S_current * x
where x is the inverse of the number of samples being averaged; try different values from 0.1 to 0.01 (roughly 10 to 100 samples).
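A minimal Python sketch of that filter, assuming noisy speed samples in bytes per second (the sample values and the choice of x are illustrative):

```python
# Exponential moving average of the measured speed, as in the formula
# above, then a remaining-time estimate from the smoothed value.

def smooth_speed(filtered_prev, current, x=0.05):
    # S_filtered = S_filtered_previous * (1 - x) + S_current * x
    return filtered_prev * (1 - x) + current * x

samples = [900, 1400, 300, 1100, 1000] * 10  # noisy measurements, bytes/s
s = samples[0]
for sample in samples[1:]:
    s = smooth_speed(s, sample)

remaining_bytes = 50_000
eta_seconds = remaining_bytes / s
print(round(s), round(eta_seconds))  # smoothed speed stays near the true mean
```

With x = 0.05 the filter effectively averages over about 20 samples, so a single outlier barely moves the displayed estimate.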
If you have the size of the file, how much of it is downloaded, and the expected download speed
from previous files
from previous samples
from a dropdown the user picks from
from a speed test
you could provide improved estimates.
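One way to combine such a prior with live measurements, sketched with made-up numbers (blended_speed, n_trust, and the linear weighting are my own illustrative assumptions, not an established formula):

```python
# Seed the speed estimate with a prior (from previous files, a speed
# test, or a user-picked value) and shift weight toward measured
# samples as they accumulate, so the very first estimates are not wild.

def blended_speed(prior, measured_avg, n_samples, n_trust=20):
    w = min(n_samples / n_trust, 1.0)  # trust measurements more over time
    return (1 - w) * prior + w * measured_avg

# Early on the prior dominates; after n_trust samples, the measurements do.
print(blended_speed(prior=1000, measured_avg=200, n_samples=1))
print(blended_speed(prior=1000, measured_avg=200, n_samples=20))
```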
Recently, I ran across Ignite and thought that style of presentation (five minutes, 20 slides, rotated automatically after 15 seconds) could work for "brown bag" sessions. It would be a way to take a lunch hour and get through a range of topics for the purpose of knowledge sharing. Because each talk is only five minutes, it could get people introduced to many topics, and if any particular topic was uninteresting, it would be over quickly. Also, it could give more individuals opportunities to present. Thoughts on the format?
As far as logistics, I would guess you would need to leave 5 minutes on each end of the hour for a buffer, and probably only schedule 8 time slots (5 minutes + 1 minute transition between). That is 57 minutes (10 buffer + 8 * 5 presentations + 7 * 1 transitions) total time. Thoughts on this scheduling? Too aggressive?
Lastly, I'm not sure if PowerPoint (or another presentation application) has an auto advance feature, or if that has to be externally scripted. Any tips here?
I once used PowerPoint as a sort of screensaver: fading between slides every X seconds, and cycling the whole presentation. (Each slide had a photo on it.) Don't remember how I did it, just that it is possible.
That's an interesting idea for a presentation that I've never heard of before. Are you not allowed to field questions from the audience afterwards?
I'd worry that your transition is too short - 60 seconds to change presenters is quick. If you do, you should make sure that they don't have to setup the laptop or the presentation itself - opening PowerPoint alone will eat more than 60 seconds of switch times. (And if someone has to setup a laptop... forget it.)
This is essentially the same format that I use in my presentations. I use PowerPoint, which does have a slide auto-advance feature. When I went to Phoenix University I did dozens of these. The format does work well.
As far as being too aggressive: if as a presenter you are organized properly, you should have no problem at all with the format.
I agree with Thanatos, though. One minute between presentations is too short. I would go for six presentations per hour. That gives 2 minutes setup, and 5 minutes questions per presentation, for a total of 12 minutes per presentation.
Observing one year of estimations during a project, I found some strange things that make me wonder whether evidence-based scheduling would work well here:
individual programmers seem to have favorite numbers (e.g. 2, 4, 8, 16, 30 hours)
the big tasks seem to be underestimated by a fixed factor (about 2), but the standard deviation is low here
the small tasks (1 or 2 hours) are very widely distributed. On average they have the same underestimation factor of 2, but the standard deviation is high:
some 5 minute spelling issues are estimated with 1 hour
other bugfixes are estimated with 1 hour too, but take a day
So, is it really a good idea to let the programmers break the 30-hour tasks down into 4- or 2-hour steps during estimation? Won't this raise the standard deviation? (OK, let them break it down, but perhaps after the estimations?!)
Yes, your observations are exactly the sort of problems EBS is designed to solve.
Yes, it's important to break bigger tasks down. Shoot for 1-2 day tasks, more or less.
If you have things estimated at under 2 hrs, see if it makes sense to group them. (It might not -- that's ok!)
If you have tasks that are estimated at 3+ days, see if there might be a way to break them up into pieces. There should be. If the estimator says there is not, make them defend that assertion. If it turns out that the task really just takes 3 days, fine, but the more of these you have, the more you should be looking hard in the mirror and seeing if folks aren't gaming the system.
Count 4 & 5 day estimates as 2x and 4x as bad as 3 day ones. Anyone who says something is going to take longer than 5 days and it can't be broken down, tell them you want them to spend 4 hrs thinking about the problem, and how it can be broken down. Remember, that's a task, btw.
As you and your team practice this, you will get better at estimating.
...You will also start to recognize patterns of failure, and solutions will present themselves.
The point of Evidence based scheduling is to use Evidence as the basis for your schedule, not a collection of wild-assed guesses. It's A Good Thing...!
I think it is a good idea. When people break tasks down, they figure out the specifics of the task. You may get small deviations here and there, one way or the other; they may compensate or not, but you get a feeling of what is happening.
If you have a huge task of 30 hours, it can take all of 100. This is the worst that could happen.
Manage the risk: split it down. You have already figured out these small deviations; you know what to do with them.
So make sure developers also know what they do and say :)
"So, is it really a good idea to let the programmers break the 30-hour tasks down into 4- or 2-hour steps during estimation? Won't this raise the standard deviation? (OK, let them break it down, but perhaps after the estimations?!)"
I certainly don't get this question at all.
What it sounds like you're saying (you may not be saying this, but it sure sounds like it)
The programmers can't estimate at all -- the numbers are always rounded to "magic" values and off by 2x.
I can't trust them to both define the work and estimate the time it takes to do the work.
Only I know the correct estimate for the time required to do the task. It's not a round 1/2 day multiple. It's an exact number of minutes.
Here's my follow-up questions:
What are you saying? What can't you do? What problem are you having? Why do you think the programmers estimate badly? Why can't they be trusted to estimate?
From your statements, nothing is broken. You're able to plan and execute to that plan. I'd say you were totally successful and doing a great job at it.
OK, I have the answer. Yes, it is right, AND the observations I made (see question) are absolutely understandable. To be sure, I made a small Excel simulation to confirm what I was guessing.
If you add multiple small tasks with a high standard deviation into bigger tasks, the totals will have a lower relative deviation, because the small tasks partially compensate for each other's uncertainty.
So the answer is: yes, it will work if you break down your tasks so that they are about the same length, because the simulation will do the compensation for the bigger tasks automatically. I do not need to worry about the higher standard deviation in the smaller tasks.
But I am sure you must not mix up low-estimate tasks with high-estimate tasks, because they simply do not have the same variance.
Hence, it is always better to break them down. :)
The Excel simulation I made:
create 50 rows with these columns:
first - a fixed value 2 (the very homogeneous estimation)
20 columns with some random function (e.g. "=rand()*rand()*20")
make sums for each column
add "=VARIANCE(..)" for each random column
and add a variance calculation for the sums
The variance for each column in my simulation was about 2-3, and the variance of the sums was below 1.
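For reference, here is a Python re-run of that simulation, using random.random() * random.random() * 20 as a stand-in for the Excel formula. Comparing relative standard deviations (spread divided by mean) makes the compensation effect explicit; the exact numbers depend on the random draw:

```python
import random
import statistics

random.seed(42)  # reproducible draw

n_projects, n_tasks = 50, 20
per_task, totals = [], []
for _ in range(n_projects):
    tasks = [random.random() * random.random() * 20 for _ in range(n_tasks)]
    per_task.extend(tasks)
    totals.append(sum(tasks))

# Relative spread: standard deviation divided by the mean.
rel_sd_task = statistics.stdev(per_task) / statistics.mean(per_task)
rel_sd_total = statistics.stdev(totals) / statistics.mean(totals)
print(f"per-task relative SD:    {rel_sd_task:.2f}")
print(f"per-project relative SD: {rel_sd_total:.2f}")
```

For independent tasks the relative spread of a 20-task total shrinks by about 1/sqrt(20), which is the compensation effect the Excel sheet shows.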