I have been working on Flex for the last couple of months, and as this was the first time I had actually used Flex, I ended up underestimating the project tasks, which resulted in a delay. So how does one estimate project timings when working with a new technology?
You can't.
You have to consider it as research, and research can't be estimated.
I would give myself a set period of time to experiment with and learn the new technology before promising to deliver anything on a specific date.
After that first period, make some rough estimates, and make sure your superiors know how rough they really are.
Use Hofstadter's Law:
It always takes longer than you expect, even when you take into account Hofstadter's Law.
When I worked on a project to switch a medium-sized development team to .NET, the only way a full conversion could be estimated was to allow an initial research phase. This allowed some developers to get familiar with the technology and implement, in full, a small part of the functionality. I found it very important that the part of the system that was worked on was completed to a production standard.
Something that was also discussed was hiring a consultant who was familiar with the technology. This was decided against because of cost, but I think it would have been enormously helpful to have someone with experience of .NET projects to point us in the right direction.
The only other thing to add is that when you work on a project of this nature it is important to also estimate how long it will take to bring other developers up to speed. Obviously this will be less than the time that the research phase took. Though the developers who worked on the prototype should be on hand to help those who are now picking up the new technology.
To sum up:
You need to give yourself time to pick up the new technology before you can give real estimates.
You need to base your estimates on experience of a full project, delivered to a production standard.
Don't be afraid of hiring a contractor with experience in order to learn best practices quickly.
Don't forget that everyone needs to learn this technology before they are let loose on production code.
I also recommend looking at this thread: Does anyone work with Function Points?
Function Points are an "industry standard" (whatever that means) for estimating how long it takes to do something. For the most part, they try to map out what the program does, and THEN you put the counts into an algorithm like this:
long GetManHoursForProject()
{
    // Gather the inputs from the analysis phase of the SDLC.
    long   Count_of_Function_Points = GetFunctionPointCountFromAnalyticalPhaseOfSDLC();
    double Average_Complexity       = 1.0;  // 0.8 for easy, 1.0 for normal, 1.2 for hard
    long   Programming_Language     = 130;  // for C++ (higher level languages have higher values)
    double Man_Months = Count_of_Function_Points * Programming_Language * Average_Complexity;
    long   Man_Hours  = (long)(Man_Months * 20 * 8); // 20 working days per month, 8 hours per day
    return Man_Hours;
}
The thread I linked above talks about Story Board Points, which is an interesting conversation in and of itself. I would look into both of these subjects to find which one works for you.
The nice thing about function points and story board points is that they have a language multiplier. The same way of thinking is used for all languages.
If you are learning a new language, then the complexity would be higher for your specific system.
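For illustration, here is a minimal Python sketch of that kind of adjustment. Every multiplier below is an invented placeholder, not a published function-point figure; substitute numbers from your own records:

# Sketch: adjust a function-point estimate for language and familiarity.
# All multiplier values below are assumptions for illustration only.

HOURS_PER_FUNCTION_POINT = 8.0   # assumed baseline productivity

LANGUAGE_MULTIPLIER = {          # hypothetical relative effort by language
    "C++": 1.0,
    "Java": 0.8,
    "Flex": 0.9,
}

def estimate_hours(function_points, language, team_knows_language):
    effort = function_points * HOURS_PER_FUNCTION_POINT
    effort *= LANGUAGE_MULTIPLIER.get(language, 1.0)
    if not team_knows_language:
        effort *= 1.5            # learning-curve penalty (assumed)
    return effort

print(estimate_hours(120, "Flex", team_knows_language=False))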
I usually estimate the time spent learning and the time spent implementing separately. That is, I estimate the project as if I already knew what I was doing, based on its complexity, and then separately estimate the time it might take me to learn the new technology.
Not too long ago, I had to work on a project in Flex, and I had never used Flex (or Flash) before. I was also forced to use a certain third-party library of widgets in this Flex application. I estimated how long I thought it would take in a reasonable language like Java, then approximately doubled it to account for learning a new language. The problem was, Flex isn't reasonable: it isn't well documented, there are a number of bugs in the standard library, and apparently our third-party library took all the design features of the standard library to heart, because it too was very broken.

We ended up with a poorly performing product with half the features, delivered over the allotted time. Thankfully, management allowed us to continue working on it for some time (they had been changing requirements, so they owed us that much) and we got it into really good shape. It still doesn't do everything we wanted, but we hacked our way around most of the library bugs, including mitigating the worst of the performance issues (namely, instantiating a UIComponent takes a LONG TIME, so instead of creating them all on startup, we do it as needed; this is unrelated to our third-party lib).
So, in short:
Always estimate lots of spin-up time for learning a new system. Beyond learning the language, you need to learn its idiosyncrasies. This is probably impossible to estimate accurately.
Avoid Flex if at all possible. I can't imagine straight Flash is any better, since they share a large part of the library.
My rule of thumb is to double the time you think it will take. I've found that you will always run into some unexpected problems that will take time to resolve.
I guess for projects of a certain size, give yourself some time to make a reasonably simple yet still complete and non-trivial prototype of some representative part of your project. Then you will have some time to play with the technology, and you will also gain valuable insights into the time it takes to create things with it.
When I've worked with a new technology in the past (i.e. when the new tech. has been central to the delivery of the project), I've had good results with estimating it as:
New Project Time = Project Time * 1.5
... but it goes without saying, this is a rule of thumb and YMMV.
One thing you can do - besides hiring someone or not estimating - is to estimate the relative complexity of your tasks and then compare the actual implementation time to the complexity level. Over time, the ratio will converge toward a stable value.
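As a minimal sketch of that idea in Python (the task data below is invented):

# Sketch: track hours-per-complexity-point so the ratio converges over time.
# Task records are (complexity_points, actual_hours); the data is invented.

history = [(3, 7.5), (1, 2.0), (5, 14.0), (2, 4.5), (3, 8.0)]

total_points = sum(points for points, _ in history)
total_hours  = sum(hours for _, hours in history)
hours_per_point = total_hours / total_points

def estimate(complexity_points):
    return complexity_points * hours_per_point

print(f"{hours_per_point:.2f} hours per point")
print(f"An 8-point task -> about {estimate(8):.1f} hours")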
I am facing a problem like yours now. After reading the comments here, I think it depends on how deeply you need to understand the new technology.
First, do a short, rough pass of research on the new technology (a sketch of the whole). Once you have a work breakdown, prioritize the pieces and research each one in more depth, in order.
I hope this is useful.
As a lead developer I often get handed specifications for a new project, and get asked how long it'll take to complete the programming side of the work involved, in terms of hours.
I was just wondering how other developers calculate these times accurately?
Thanks!
Oh, and I hope this isn't considered an argumentative question; I'm just interested in finding the best technique!
Estimation is often considered a black art, but it's actually much more manageable than people think.
At Inntec, we do contract software development, most of which involves working against a fixed cost. If our estimates were constantly way off, we would be out of business in no time.
But we've been in business for 15 years and we're profitable, so clearly this whole estimation thing is solvable.
Getting Started
Most people who insist that estimation is impossible are making wild guesses. That can work sporadically for the smallest projects, but it definitely does not scale. To get consistent accuracy, you need a systematic approach.
Years ago, my mentor told me what worked for him. It's a lot like Joel Spolsky's old estimation method, which you can read about here: Joel on Estimation. This is a simple, low-tech approach, and it works great for small teams. It may break down or require modification for larger teams, where communication and process overhead start to take up a significant percent of each developer's time.
In a nutshell, I use a spreadsheet to break the project down into small (less than 8 hour) chunks, taking into account everything from testing to communication to documentation. At the end I add in a 20% multiplier for unexpected items and bugs (which we have to fix for free for 30 days).
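The arithmetic behind the spreadsheet is simple; a sketch in Python, with invented task names and hours:

# Sketch of the spreadsheet math: sum small (< 8 hour) chunks, then add a
# 20% buffer for unexpected items and bugs. All numbers are invented.

tasks = {
    "design users page": 4,
    "implement users page": 6,
    "write tags query": 3,
    "testing": 5,
    "documentation": 2,
}

subtotal = sum(tasks.values())
total = subtotal * 1.20   # 20% contingency multiplier

print(f"subtotal: {subtotal}h, with buffer: {total:.0f}h")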
It is very hard to hold someone to an estimate that they had no part in devising. Some people like to have the whole team estimate each item and go with the highest number. I would say that at the very least, you should make pessimistic estimates and give your team a chance to speak up if they think you're off.
Learning and Improving
You need feedback to improve. That means tracking the actual hours you spend so that you can make a comparison and tune your estimation sense.
Right now at Inntec, before we start work on a big project, the spreadsheet line items become sticky notes on our kanban board, and the project manager tracks progress on them every day. Any time we go over or have an item we didn't consider, that goes up as a tiny red sticky, and it also goes into our burn-down report. Those two tools together provide invaluable feedback to the whole team.
Here's a pic of a typical kanban board, about halfway through a small project.
You might not be able to read the column headers, but they say Backlog, Brian, Keith, and Done. The backlog is broken down by groups (admin area, etc), and the developers have a column that shows the item(s) they're working on.
If you could look closely, all those notes have the estimated number of hours on them, and the ones in my column, if you were to add them up, should equal around 8, since that's how many hours are in my work day. It's unusual to have four in one day. Keith's column is empty, so he was probably out on this day.
If you have no idea what I'm talking about re: stand-up meetings, scrum, burn-down reports, etc then take a look at the scrum methodology. We don't follow it to the letter, but it has some great ideas not only for doing estimations, but for learning how to predict when your project will ship as new work is added and estimates are missed or met or exceeded (it does happen). You can look at this awesome tool called a burn-down report and say: we can indeed ship in one month, and let's look at our burn-down report to decide which features we're cutting.
FogBugz has something called Evidence-Based Scheduling which might be an easier, more automated way of getting the benefits I described above. Right now I am trying it out on a small project that starts in a few weeks. It has a built-in burn down report and it adapts to your scheduling inaccuracies, so that could be quite powerful.
Update: Just a quick note. A few years have passed, but so far I think everything in this post still holds up today. I updated it to use the word kanban, since the image above is actually a kanban board.
There is no general technique. You will have to rely on your (and your developers') experience. You will have to take into account all the environment and development-process variables as well. And even if you cope with all this, there is a big chance you will miss something out.
I do not see any point in estimating the programming time only. The development process is so interconnected that estimating one side of it alone won't produce any valuable result. The whole thing should be estimated, including programming, testing, deploying, developing the architecture, writing docs (tech docs and the user manual), creating and managing tickets in an issue tracker, meetings, vacations, sick leaves (sometimes it is better to wait for the guy than to assign the task to someone else), planning sessions, coffee breaks.
Here is an example: it takes only 3 minutes for an egg to cook once you drop it into the frying pan. But if you say that it takes 3 minutes to make a fried egg, you are wrong. You missed out:
taking out the frying pan (do you have one ready? Do you have to go and buy one? Do you have to wait in a queue for this frying pan?)
making the fire (do you have an oven? Will you have to get logs to build a fireplace?)
getting the oil (have any? have to buy some?)
getting an egg
frying it
serving it on a plate (have any ready? clean? wash it? buy it? wait for the dishwasher to finish?)
cleaning up after cooking (you won't leave the dirty frying pan as it is, will you?)
Here is a good starting book on project estimation:
http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
It has a good description of several estimation techniques. It can get you up to speed in a couple of hours of reading.
Good estimation comes with experience, or sometimes not at all.
At my current job, my two co-workers (who apparently had a lot more experience than me) usually underestimated times by 8 (yes, EIGHT) times. I, OTOH, have only once in the last 18 months gone over an original estimate.
Why does it happen? Neither of them actually appeared to know what they were doing, code-wise, so they were literally thumb-sucking their estimates.
Bottom line:
Never underestimate, it is much safer to overestimate. Given the latter you can always 'speed up' development, if needed. If you are already on a tight time-line, there is not much you can do.
If you're using FogBugz, then its Evidence Based Scheduling makes estimating a completion date very easy.
You may not be, but perhaps you could apply the principles of EBS to come up with an estimation.
This is probably one of the big mysteries of the IT business. Countless failed software projects have shown that there is no perfect solution yet, but the closest thing to solving it I have found so far is to use the adaptive estimation mechanism built into FogBugz.
Basically, you break your milestones into small tasks and guess how long each will take you to complete. No task should be longer than about 8 hours. Then you enter all these tasks as planned features into FogBugz. When completing the tasks, you track your time with FogBugz.
FogBugz then evaluates your past estimates and actual time consumption, and uses that information to predict a window (with probabilities) in which you will have fulfilled your next few milestones.
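The core idea can be approximated with a small Monte Carlo simulation over your historical estimate-to-actual ratios. This is only a sketch of the concept, not FogBugz's actual algorithm, and all the numbers are invented:

# Sketch of evidence-based scheduling: sample historical actual/estimate
# velocities to turn remaining task estimates into a distribution of
# completion times. Velocities and estimates below are invented.

import random

past_velocities = [1.1, 0.9, 1.6, 2.0, 1.2, 0.8, 1.4]  # actual / estimated
remaining_estimates = [6, 8, 4, 7, 3]                    # hours per task

def simulate_total_hours():
    return sum(est * random.choice(past_velocities)
               for est in remaining_estimates)

outcomes = sorted(simulate_total_hours() for _ in range(10_000))
for pct in (50, 80, 95):
    print(f"{pct}% chance of finishing within "
          f"{outcomes[len(outcomes) * pct // 100]:.0f} hours")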
Overestimating is better than underestimating. That's because we don't know the "unknown", and (in most cases) specifications change during the software development lifecycle.
In my experience, we use iterative steps (just like in agile methodologies) to determine our timeline. We break projects down into components and overestimate on those components. We then take the sum of these estimates and add extra time for regression testing and deployment, and all the good work...
I think that you have to go back from your past projects, and learn from those mistakes to see how you can estimate wisely.
What are the main points to keep in mind when estimating the time for a research and development task? Suppose I have to estimate an "ABC" task using "WPF" technology, and I have no experience with it; I need to do some R&D for it.
Don't give an estimate until you have had time to play with the technology. Allocate a certain time (2 days, 1 week, whatever you can get from management) to understand the concepts and write some code yourself with it, to get a sense for what the development process takes and how steep the learning curve is. Then, estimate.
Pure Research Projects
Set a time or resource cap, in addition to a number of interim milestones/reviews, to re-evaluate whether you can afford to continue. Ideally, before embarking on the research you will have a good idea of the potential benefits of succeeding. You might also want to define different grades of success, and a contingency plan in case the effort does not come to fruition, before you start.
The spiral model of development will come in handy here.
Applying Existing Technology to a Problem
For current mainstream technologies such as WPF, you might try to find out how long it would take someone with comparable experience to learn the technology. Evidence might be collected from other people's experience and from available training courses.
For non-current or niche technologies you might be better off hiring a consultant or sub-contracting the job (bear in mind the difference between consultant and contractor).
Grade the project on the scale:
Keeping Status Quo - Bug Fixing - Enhancement - New Functionality - New Product - Revolutionary
Each position on the scale will usually mean a factor of 2..5 increase in risk and effort. It helps to have a reference point: if it normally takes 2 days end-to-end in your organisation to fix a bug, you can gauge that an enhancement will take two to five times longer, anything between 4 and 10 days; of course, coding will be only a small proportion of this effort.
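A minimal sketch of that grading arithmetic in Python, assuming a factor of 3 (the middle of the suggested 2..5 range):

# Sketch: scale effort up the "Status Quo .. Revolutionary" ladder.
# Each step multiplies the reference effort by an assumed factor of 3
# (the answer suggests anywhere from 2 to 5).

GRADES = ["status quo", "bug fixing", "enhancement",
          "new functionality", "new product", "revolutionary"]

def grade_estimate(reference_days, grade, factor=3):
    steps = GRADES.index(grade) - GRADES.index("bug fixing")
    return reference_days * factor ** steps

# If a bug fix takes 2 days end-to-end in your organisation:
print(grade_estimate(2, "enhancement"))        # 6 days with factor 3
print(grade_estimate(2, "new functionality"))  # 18 days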
Ideally, one should not give an estimate without solid evidence. After all, an estimate is a probability, and probabilities are mathematically significant figures, not gut feelings pulled out of thin air. (See "Software Estimation" by Steve McConnell for more on this.)
Unfortunately, too often we are required to provide estimates on tasks for which we have a great deal of uncertainty about the technologies that will be involved. This is the case, for example, of government grants and other non-technical scenarios. In these cases, and being pragmatic, it is good to provide an estimate even when we are not familiar with the technologies.
Techniques that I often use include uncertainty cones and timeboxed development.
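As an illustration of the uncertainty-cone idea, here is a sketch using the range multipliers commonly cited from McConnell's book; the 100-hour point estimate is invented:

# Sketch: express an estimate as a range using cone-of-uncertainty
# multipliers (values as commonly cited from McConnell's "Software
# Estimation"; the point estimate is invented).

CONE = {
    "initial concept":     (0.25, 4.00),
    "approved definition": (0.50, 2.00),
    "requirements done":   (0.67, 1.50),
    "design done":         (0.80, 1.25),
}

def estimate_range(point_estimate_hours, phase):
    low, high = CONE[phase]
    return point_estimate_hours * low, point_estimate_hours * high

lo, hi = estimate_range(100, "initial concept")
print(f"somewhere between {lo:.0f} and {hi:.0f} hours")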
Hope this helps.
The best way to approach it is to consult with someone who has been there already.
Their experience, plus a general idea of how good they are compared to your staff, should give you a fair estimate.
The older the technology is, the more experienced people there will be around, and the more places on the web to find answers to questions.
If you're researching something brand new, the data sources will be limited, so I would take any estimate and double it...
You could take a guess as to how long you think it'll take you to research the new technology, then how long it'll take you to do the development, and multiply the total by two. Of course that's pretty fluffy, but anything involving estimating a task usually is. There are so many factors involved when estimating: dealing with new technologies can take longer than you think, and you usually also have to deal with code written by other people, which can add an 'x' factor of complexity to what should be a simple task.
When estimating time, it's always best to at least have a general 'spike' where you sit down (by yourself, or even better with another team member) and have a play for an hour or two (or however long you choose). This at least gives you a bit of context for what you're dealing with. When looking at the new technology, read a bit of the documentation, then read and work through a 'getting started' guide, etc. When you come back to the estimation table, you will have a much better idea of what you're dealing with.
I lead a small group of programmers in a university setting, having just moved into this position last year. While the majority of our team are full time employees, we have a couple of people who are traditionally graduate assistants.
The competition for these assistantships is fairly intense, as they get free graduate school tuition on top of their salary while they have the job. We require that they sign up for at least a year, though we consider ourselves lucky if they stay for two. After that, they get their master's degree and move on to bigger and better things.
As you can imagine, hiring and re-training these positions is time- and resource-intensive. To make matters worse, up to now they have typically been the sole developer working on their respective projects, with me acting in an advisory and supervisory role, so wrangling the projects themselves to fight the entropy as we switch from developer to developer is a task unto itself.
I'm tempted to bring up to the administrators the possibility of hiring a full-time (and long-haul) developer to replace these two positions, but for a school in a budget crisis, paying for two half-time graduate assistants is far cheaper (in terms of salary and benefits) than paying for one full-time developer. Also, since I'm new to this position, I'd like to avoid seeming as though I'm not able to deal with what I signed up for. For the foreseeable future, I don't think the practice of hiring short-term graduate assistants is going to change.
My question: What can I do to create an effective training program considering that the employees may be gone after as little as a year on the job?
How much time should I invest in training them, and how much would simply be a waste of time?
How much time should they take simply getting acclimated to our process and the project?
Are there any specific training practices or techniques that can help with this kind of situation?
Has anyone dealt with a similar situation before?
Do I worry too much, or not enough?
By the way, and for the record, we do the vast majority of our development in Perl. It's hard to find grad students who know Perl, while on the other hand everybody seems to have at least an academic understanding of Java. Hence this question which I asked a while back.
Why don't you ask the students what they find difficult and make cheat sheets, lectures, etc. for the parts of the job that they have trouble with? Maybe you need to create some introductory Perl lectures or purchase some dead trees. How about a Safari subscription at O'Reilly? I'd ask the students how they prefer to learn, though, before embarking on a training project. Everyone has different learning styles.
I'd also spend some time and capital creating a culture of professional software development at work. It'll be tough since academic programmers are often neophytes and used to kludging up solutions (I'm an academic programmer, btw) but the students will thank you in the long run. Maybe you can all go out to lunch once a week to discuss programming and other topics. You might also want to take some time to do code reviews so people can learn from each other.
With high turnover you definitely need to ensure that knowledge transfer occurs. Make sure you are using source code control and that your students understand proper commenting. I'd also make the students create brief documentation for posterity. If they are getting credit, make them turn in a writeup of their progress once a semester. You can put this in a directory in the project's repository for anyone who inherits it. As mentioned in other posts, a group wiki can really help with knowledge transfer. We use Mediawiki in our group and like it a lot.
One last thing I should add: I find it helps to keep a list of projects for new developers that are relatively easy and can be completed in a month or so. They are a great way for new people to get acclimated to your development environment.
This is a relative question, and should be taken on a case-by-case basis. If the new hire already knows Perl, you don't need to go over this piece of training (yes, you could put Perl as a mandatory prerequisite, but that would significantly limit your applicant pool), and their first bit of training should be something like fixing a bug in an existing application or walking them through an application they will maintain. Though, given that the developers are only there for a year makes me think the development styles are going to vary some (if not a lot).
Getting the new person up to speed with your process is very important, as long as your process works. In this high turnover environment, you should put a strong emphasis on documentation in your process. A Wiki is a great thing to have for this documentation, since it's centralized and any of the developers can access it. Having them try to figure out how a project works by themselves (with little to no documentation) is a waste of both their time and your time.
Perhaps I'm reading too much into the question, but if your university teaches java, why are you using Perl? Wouldn't it make more sense to use the tools that your students already know? This alone would cut the learning curve significantly. [once you eliminate the legacy code of course]
other than that, try:
break the projects up into month-sized bites
overlap the internships by at least 2 months, if not 6, so the new guy can work with and be trained by the old guy(s)
document whatever repeatable processes you have (as was suggested by Mark Nold)
if the grad students are cheaper than full-time pros, quit whinin' ;-) If not, go for the pros.
Have you considered making a "three-ring binder" like McDonald's and many other high-turnover industries have? Have one folder which you can print out and hand to the new hire, showing the basics of getting up and running with Perl in your environment. This should be a "hello world", plus some basic regex and array manipulation. Lastly, your manual should go on to show examples of the 5 things you find yourself doing all the time.
The example code may be authenticating users against an external security system, walking through recordsets or using ghostscript to create PDFs. Whatever they are, they should cover the basics of what you meet 80% of the time. More importantly the examples should show users how you expect the code to be written for clarity (eg: naming and approach), and give them some insight into servers and software in use and other practicalities which a generic book won't show them.
You won't get the binder right first time, but since you have a high staff turn over, you'll have plenty of time to test and improve it.
On top of this I would pick a single Perl programming book and, on their first day, give the new user their own copy of your three-ring binder plus "Programming Perl" to keep. At a cost of $50 per hire, I'm sure it's a lot cheaper than the alternatives, and you'll have them flipping burgers... I mean cutting code in no time.
My initial couple of thoughts are that you should:
hire for the position, i.e. it's Perl-centric so make that a big part of the pre-requisites. That way you don't need that piece of training as well.
invest time in the on-boarding process, maybe use a wiki so that you can easily update it to help bringing them on-board.
Edit: Some extra points:
maybe have a chat and see if Perl can be introduced into the curriculum? If not, then make it known six months before the ads go up that applicants need to know Perl. This way you'll get people who have Perl experience and who have actively demonstrated their motivation.
can you open up some small projects so that they could be done by potential candidates during this six months?
approach the design of your large-scale projects so that they can be done in a piecemeal manner. This is how the ACE components have been done, IIRC.
allow a specific period for documentation and review of the work done by the departing grad student.
allow an overlap period of at least a couple of weeks where the new grad student can work with the departing grad student. They can learn the development environment and they can be guinea pigs for any updates to your wiki.
Still more to come...
HTH
cheers
That is a pickle, but it is not as uncommon in the commercial sector as you would think. I heard a statistic once that the average tenure of a programmer industry-wide is about 18-24 months. Normally I would suggest getting more experienced programmers who would require less ramp-up time and only need to be trained on the problem domain and technology updates, not the basics.
I think your best bet is just to ask for about 30-50% more grad students than it will take to actually perform the job to account for the learning and ramp-up time and invest in some additional resources for testing as this environment is a recipe for mistakes since everyone is learning on the job. Also, this is probably difficult given the academic schedule, but try to stagger the start dates as much as possible to maximize the overlap between employees. Pair-programming teams of new-hires/old-hires might also help increase consistency and supplement the training without sacrificing too much productivity.
How much time should I invest in training them, and how much would simply be a waste of time?
Answer: It's not the amount of time or the amount of waste, but perhaps the approach. Would it be possible to video the training? Video yourself training one person and provide it as training for subsequent students/developers. You can add to it over time, and it reduces the time you need to spend going through the same process.
How much time should they take simply getting acclimated to our process and the project?
Answer: This is all dependent on the person... a minimum of a day, a maximum of 2-3 weeks, I would guess on average.
Are there any specific training practices or techniques that can help with this kind of situation?
Answer: Video training (home made), having the current student/developer create/update a wiki of needed info, points of interest, etc.
Has anyone dealt with a similar situation before?
Answer: It used to be a 12-18 month average turnover; I would imagine that's changed now (gotten longer), but every company has turnover, perhaps just not forced like yours, where the resource is students.
Do I worry too much, or not enough?
Answer: Knowledge lost through transition is a key risk area...
Is the application something you could consider open-sourcing?
I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't plan for that from the start, but I think maybe it's for motivational purposes.
Ryan
It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).
This is based on Joel Spolsky's Evidence Based Scheduling, essential reading, as he explains that the primary other important aspect is breaking your tasks down into bite-sized (16-hour max) tasks, estimating and adding those together to arrive at your final project total.
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects between 1 and 6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects, as there are too many factors that mean the metrics from one aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates will have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration, as advocated by agile advocates: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being effectively spent. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available to do it. No matter how ruthless you are in cutting unnecessary features, you would have been more brutal if the deadlines were even tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.
I believe Joel has written an article on this: what you can do is ask each developer on the team to lay out his task in detail (all the steps that need to be done) and ask them to estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias of each developer. When a new project is started, ask them to estimate the time again, and multiply that by each developer's bias to get values close to what's really expected.
After a few projects, you should have very good estimates.
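A minimal Python sketch of that per-developer bias correction (all hours invented):

# Sketch: compute each developer's estimation bias from a finished project
# and apply it to new estimates. All hours below are invented.

past = {                       # developer -> (estimated, actual) hours
    "alice": (40, 52),
    "bob":   (30, 66),
}

bias = {dev: actual / est for dev, (est, actual) in past.items()}

new_estimates = {"alice": 24, "bob": 16}
corrected = {dev: hours * bias[dev] for dev, hours in new_estimates.items()}

print(corrected)   # alice: 31.2, bob: 35.2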
Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate.
So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else?
Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA), and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to figure out time estimates for bug fixing, but not for testing)?
It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, the number of developers available at this stage in the project (many project managers will move people onto something new at the end).
As Rob Rolnick says, 1:1 is a good rule of thumb; however, in cases where a specification is bad, the client may push for "bugs" which are actually badly specified features. I was recently involved in a project which had many releases, but more time was spent on bug fixing than on actual development due to the terrible specification.
Ensure a good specification/design, and your testing/bug-fixing time will be reduced: it will be easier for testers to see what and how to test, and clients will have less leeway to push for extra features.
Maybe I just write buggy code, but I like having a 1:1 ratio between development and testing. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there could be a good deal of time between when development starts and your alpha, beta, and ship dates. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix.
A good tester, who finds bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK.) Simply put, I am still extremely familiar with my code, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time for bug fixing, at least when I do estimates. So in a 16-week run I'd leave around 2-3 weeks.
Only a good amount of accumulated statistics from previous projects can help you give precise estimates. If you have a well-defined set of requirements, you can make a rough calculation of how many use cases you have. As I said, you need statistics for your team. You need to know the average bugs-per-LOC number to estimate the total bug count. If you don't have such numbers for your team, you can use industry averages. After you have estimated LOC (number of use cases * NLOC) and average bugs-per-line, you can give a more or less accurate estimate of the time required to release the project.
From my practical experience, time spent on bug fixing is equal to or greater than (in 99% of cases :) ) the time spent on the original implementation.
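Concretely, the calculation described above might be sketched like this in Python; every constant is an assumption standing in for your team's (or industry) statistics:

# Sketch of the defect-based sizing described above. All constants are
# assumptions standing in for your team's (or industry) statistics.

use_cases          = 40
loc_per_use_case   = 500      # NLOC per use case, assumed
defects_per_kloc   = 15       # historical defect density, assumed
hours_per_bug_fix  = 2.5      # average fix time, assumed

total_loc   = use_cases * loc_per_use_case
total_bugs  = total_loc / 1000 * defects_per_kloc
fixing_time = total_bugs * hours_per_bug_fix

print(f"{total_loc} LOC, ~{total_bugs:.0f} bugs, "
      f"~{fixing_time:.0f} hours of bug fixing")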
From the testing Bible:
Testing Computer Software
p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
Use a language with Design-by-Contract or "code contracts" (preconditions, check assertions, postconditions, class invariants, etc.) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code with its contracts.
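Eiffel has contracts built in; in other languages you can approximate the idea. A hedged sketch in Python using plain assertions:

# Sketch of Design-by-Contract: preconditions, postconditions, and a
# class invariant checked on every mutation.

class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        assert self.balance >= 0, "invariant: balance never negative"

    def withdraw(self, amount):
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= self.balance, "precondition: sufficient funds"
        old = self.balance
        self.balance -= amount
        assert self.balance == old - amount, "postcondition: balance reduced"
        self._check_invariant()

acct = Account(100)
acct.withdraw(30)
print(acct.balance)   # 70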
Use as much self-built code-generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-coded code. Why write what you can generate? However, do not use OPG (other-peoples-generators)! Code YOU generate is code you control and know.
You can expect this ratio to invert over the course of your project; that is, you will write lots of hand code and contracts at the start (1:1) of your project. As you see patterns, teach a code generator YOU WRITE to generate the code for you and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that your equation has inverted: you're writing less of your core code, and your focus shifts to your "leaf code" (last-mile) or specialized (vs. generalized and generated) code.
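As an invented example of the kind of small, self-built generator this describes (Python, generating a trivial data class from a field list):

# Sketch: a tiny self-built code generator that stamps out repetitive
# data-class code from a spec. The field list is invented.

FIELDS = [("name", "str"), ("email", "str"), ("age", "int")]

def generate_class(class_name, fields):
    lines = [f"class {class_name}:"]
    args = ", ".join(f"{n}: {t}" for n, t in fields)
    lines.append(f"    def __init__(self, {args}):")
    for n, _ in fields:
        lines.append(f"        self.{n} = {n}")
    return "\n".join(lines)

source = generate_class("User", FIELDS)
print(source)          # review the generated code...
exec(source)           # ...or load it directly
print(User("Ann", "ann@example.com", 30).age)   # 30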
Finally: get a code analyzer. A good, automated code-analysis rule system and engine will save you oodles of time finding "stupid bugs", because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules that come with it, but are learning to write our own rules for our own discovered "gotchas". Such analyzers not only save you in terms of bugs, but enhance your design; even GREEN programmers "get it" rather quickly, stop making rookie mistakes earlier, and learn faster!
The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design-by-Contract, Code Analysis, and Code Generation, we have re-written a 14 year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!