Mythical Man-Month's 10 lines per developer per day - how close on large projects?

Everybody always says that they can beat the "10 lines per developer per day" from The Mythical Man-Month, and when starting a project I can usually get a couple hundred lines in a day.
But at my previous employer, all the developers were very sharp, and it was a large project: over a million lines of code, with very onerous certification requirements, and interfacing with other multi-million-line projects. At some point, as an exercise in curiosity, I plotted lines of code in the shipping product in my group (not counting tools we developed), and sure enough, incrementally, it came to around 12 net lines added per developer per day. And that's not counting changes, test code, or the fact that developers weren't working on the actual project code every day.
How are other people doing? And what sort of requirements do you face (I imagine it's a factor)?

On one of my current projects, in some modules, I am proud to have contributed a negative line count to the code base. Identifying which areas of code have grown unnecessary complexity and can be simplified with a cleaner and clearer design is a useful skill.
Of course some problems are inherently complex and require complex solutions, but on most large projects the areas that have had poorly defined or changing requirements tend to end up with overly complex solutions and a higher number of issues per line.
Given a problem to solve, I much prefer the solution that reduces the line count. Of course, at the start of a small project I can generate many more than ten lines of code per day, but I tend not to think about the amount of code that I've written, only what it does and how well it does it. I certainly wouldn't aim to beat ten lines per day or consider it an achievement to do so.

I like this quote:
If we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent".
- Edsger Dijkstra
Sometimes you contribute more by removing code than by adding it.

I think the number of lines added is highly dependent upon the state of the project; the rate of adding code to a new project will be much higher than the rate on a mature one.
The work is different between the two: on a large project you usually spend most of the time figuring out the relationships between the parts, and only a small amount actually changing or adding code, whereas on a new project you mostly write, until it's big enough that the rate decreases.

You should stop using this metric; it is meaningless for the most part. Cohesion, coupling and complexity are more important metrics than lines of code.

How are other people doing?
I am the only full-time dev at our company and have written 500,000 lines of OCaml and F# code over the past 7 years, which equates to about 200 lines of code per day. However, the vast majority of that code is tutorial examples consisting of hundreds of separate projects each a few hundred lines long. Also, there is a lot of duplication between the OCaml and the F#. We are not maintaining any in-house code bases larger than 50kLOC.
In addition to developing and maintaining our own software, I have also consulted for many clients in industry over the past 7 years. For the first client, I wrote 2,000 lines of OCaml over 3 months which is 20 lines of code per day. For the next client, four of us wrote a compiler that generated millions of lines of C/C++/Python/Java/OCaml code as well as documentation in 6 months which is 2,000 lines of code per day per developer. For another client, I replaced 50kLOC of C++ with 6kLOC of F# in 6 months which is -352 lines of code per day. For yet another client, I am rewriting 15kLOC of OCaml in F# which will be the same size so 0 lines of code per day.
For our current client, I will replace 1,600,000 lines of C++ and Mathematica code with ~160kLOC of F# in 1 year (by writing a bespoke compiler) which will be -6,000 lines of code per day. This will be my most successful project to date and will save our client millions of dollars a year in on-going costs. I think everyone should aim to write -6,000 lines of code per day.

Without actually checking my copy of "The Mythical Man-Month" (everybody reading this should really have a copy readily available), there was a chapter in which Brooks looked at productivity by lines written. The interesting point, to him, was not the actual number of lines written per day, but the fact that it seemed to be roughly the same in assembler and in PL/I (I think that was the higher-level language used).
Brooks wasn't about to throw out some sort of arbitrary figure of productivity, but he was working from data on real projects, and for all I can remember they might have been 12 lines/day on the average.
He did point out that productivity could be expected to vary. He said that compilers were three times as hard as application programs, and operating systems three times as hard as compilers. (He seems to have liked using multipliers of three to separate categories.)
I don't know if he appreciated then the individual differences between programmer productivity (although in an order-of-magnitude argument he did postulate a factor of seven difference), but as we know superior productivity isn't just a matter of writing more code, but also writing the right code to do the job.
There's also the question of the environment. Brooks speculated a bit about what would make developers faster or slower. Like lots of people, he questioned whether the current fads (interactive debugging using time-sharing systems) were any better than the old ways (careful preplanning for a two-hour shot using the whole machine).
Given that, I would disregard any actual productivity number he came up with as useless; the continuing value of the book is in the principles and more general lessons that people persist in not learning. (Hey, if everybody had learned them, the book would be of historical interest only, much like all of Freud's arguments that there is something like a subconscious mind.)

It's easy to get a couple of hundred lines of code per day. But try to get a couple of hundred quality lines of code per day and it's not so easy. Add to that debugging, and days with little or no new code at all, and the average comes down rather quickly. I've spent weeks debugging difficult issues where the answer turned out to be 1 or 2 lines of code.

It would be much better to realize that talking about physical lines of code is pretty meaningless. The number of physical lines of code (LoC) is so dependent on coding style that it can vary by an order of magnitude from one developer to another.
In the .NET world there is a convenient way to count LoC: the sequence point. A sequence point is a unit of debugging; it is the portion of code highlighted in dark red when you set a breakpoint. With sequence points we can talk about logical LoC, and this metric can be compared across the various .NET languages. The logical LoC metric is supported by most .NET tools, including the Visual Studio code metrics, NDepend and NCover.
For example, here is an 8-LoC method (the sequence points on the opening and closing braces are not taken into account):
The production of LoC must be counted over the long term. Some days you'll spit out more than 200 LoC; on other days you'll spend 8 hours fixing a bug without adding a single LoC. Some days you'll clean up dead code and remove LoC; some days you'll spend all your time refactoring existing code without adding any new LoC to the total.
Personally, I count a single LoC in my own productivity score only when:
It is covered by unit tests.
It is associated with some sort of code contract (if possible; not all LoC can be checked by contracts, of course).
Under these conditions, my personal score over the last 5 years of coding the NDepend tool for .NET developers is an average of 80 physical LoC per day, without sacrificing code quality in any way. The rhythm is sustainable and I don't see it decreasing any time soon. All in all, NDepend is a C# code base that currently weighs in at around 115K physical LoC.
For those who hate counting LoC (I see many of them in the comments here), I can attest that, once adequately calibrated, counting LoC is an excellent estimation tool. After coding and measuring dozens of features in my particular development context, I have reached the point where I can precisely estimate the size of any upcoming feature in LoC, and the time it will take me to deliver it to production.

There is no such thing as a silver bullet.
A single metric like that is useless by itself.
For instance, I have my own class library. Currently, the following statistics are true:
Total lines: 252,682
Code lines: 127,323
Comments: 99,538
Empty lines: 25,821
Let's ignore the comments and consider just the code, that is, 127,323 lines. At your ratio, that library would have taken me around 10,610 days to write. That's 29 years.
I certainly didn't spend 29 years writing that code, since it's all C#, and C# hasn't been around that long.
Now, you can argue that the code isn't all that good, since obviously I must have surpassed your 12-lines-a-day metric, and yes, I'll agree to that. But even if I stretch the timeline back to when .NET 1.0 was released (and I didn't actually start writing the library until 2.0 was released), which is 2002-02-13, that's about 2,600 days, and the average is 48 lines of code a day.
Are all of those lines of code good? Heck no. But down to 12 lines of code a day?
Heck no.
Everything depends.
You can have a top-notch programmer churning out code on the order of thousands of lines a day, and a mediocre programmer churning out code on the order of hundreds of lines a day, and the quality is the same.
And yes, there will be bugs.
What you want is the balance: the amount of code changed, versus the number of bugs found, versus the complexity of the code, versus the difficulty of fixing those bugs.

Steve McConnell gives an interesting statistic in his book "Software Estimation" (p62 Table 5.2)
He distinguishes between project types (avionics, business, telco, etc.) and project sizes of 10 kLOC, 100 kLOC and 250 kLOC. The numbers are given for each combination in LOC per staff-month.
E.g. (for 10 kLOC, 100 kLOC and 250 kLOC respectively):
Avionics: 200, 50, 40
Intranet systems (internal): 4000, 800, 600
Embedded systems: 300, 70, 60
Which means, e.g., that for a 250-kLOC avionics project you get 40 (LOC/month) / 22 (days/month) = less than 2 LOC/day!
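Spelled out as a quick sketch, assuming roughly 22 working days per month and using only the figures quoted above (this is just the arithmetic, not anything from McConnell's book):

    using System;
    using System.Collections.Generic;

    static class LocPerDay
    {
        const double WorkingDaysPerMonth = 22.0;

        static void Main()
        {
            // LOC per staff-month for 10 kLOC, 100 kLOC and 250 kLOC projects.
            var locPerStaffMonth = new Dictionary<string, int[]>
            {
                ["Avionics"]            = new[] { 200, 50, 40 },
                ["Intranet (internal)"] = new[] { 4000, 800, 600 },
                ["Embedded systems"]    = new[] { 300, 70, 60 },
            };

            foreach (var entry in locPerStaffMonth)
                foreach (var perMonth in entry.Value)
                    Console.WriteLine(
                        $"{entry.Key}: {perMonth} LOC/staff-month = {perMonth / WorkingDaysPerMonth:F1} LOC/day");
        }
    }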

I think this comes from the waterfall development days, where the actual development phase of a project could be as little as 20-30% of the total project time. Take the total lines of code and divide by the entire project time and you'll get around 10 lines/day. Divide by just the coding period, and you'll get closer to what people are quoting.

Our codebase is about 2.2 MLoC for about 150 man-years of effort. That makes it about 75 lines of C++ or C# per developer per day, over the whole life of the project.

I think project size and the number of developers involved are big factors in this. I'm far above this rate over my career, but I've worked alone all that time, so there's no overhead from coordinating with other programmers.

Good planning, good design and good programmers. Get all of that together and you will not spend 30 minutes writing one line.
Yes, all projects require you to stop and plan, think things over, discuss, test and debug, but at two lines per day every company would need an army to get Tetris to work...
Bottom line: if you were working for me at 2 lines per hour, you'd better be getting me a lot of coffees and massaging my feet so you didn't get fired.

One suspects this perennial bit of manager-candy was coined when everything was a systems app written in C, because if nothing else the magic number would vary by orders of magnitude depending on the language, scale and nature of the application. And then you have to discount comments and attributes. And ultimately, who cares about the number of lines of code written? Are you supposed to be finished when you've reached 10K lines? 100K? So arbitrary.
It's useless.

Related

Calculating Project Programming Times

As a lead developer I often get handed specifications for a new project, and get asked how long it'll take to complete the programming side of the work involved, in terms of hours.
I was just wondering how other developers calculate these times accurately?
Thanks!
Oh, and I hope this isn't considered an argumentative question; I'm just interested in finding the best technique!
Estimation is often considered a black art, but it's actually much more manageable than people think.
At Inntec, we do contract software development, most of which involves working against a fixed cost. If our estimates were constantly way off, we would be out of business in no time.
But we've been in business for 15 years and we're profitable, so clearly this whole estimation thing is solvable.
Getting Started
Most people who insist that estimation is impossible are making wild guesses. That can work sporadically for the smallest projects, but it definitely does not scale. To get consistent accuracy, you need a systematic approach.
Years ago, my mentor told me what worked for him. It's a lot like Joel Spolsky's old estimation method, which you can read about here: Joel on Estimation. This is a simple, low-tech approach, and it works great for small teams. It may break down or require modification for larger teams, where communication and process overhead start to take up a significant percent of each developer's time.
In a nutshell, I use a spreadsheet to break the project down into small (less than 8 hour) chunks, taking into account everything from testing to communication to documentation. At the end I add in a 20% multiplier for unexpected items and bugs (which we have to fix for free for 30 days).
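As a rough sketch of that spreadsheet arithmetic (the task names and hours below are made up for illustration; every chunk stays under 8 hours, and 20% is added for the unexpected):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class ProjectEstimate
    {
        static void Main()
        {
            // Hypothetical task breakdown; each chunk is under 8 hours.
            var taskHours = new Dictionary<string, double>
            {
                ["Design users page"]    = 4,
                ["Implement users page"] = 6,
                ["Write tags query"]     = 3,
                ["Integration testing"]  = 5,
                ["Documentation"]        = 2,
            };

            double baseHours = taskHours.Values.Sum();
            double withContingency = baseHours * 1.20;   // 20% for unexpected items and bugs

            Console.WriteLine($"Base estimate: {baseHours} h");
            Console.WriteLine($"With 20% contingency: {withContingency} h");
        }
    }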
It is very hard to hold someone to an estimate that they had no part in devising. Some people like to have the whole team estimate each item and go with the highest number. I would say that at the very least, you should make pessimistic estimates and give your team a chance to speak up if they think you're off.
Learning and Improving
You need feedback to improve. That means tracking the actual hours you spend so that you can make a comparison and tune your estimation sense.
Right now at Inntec, before we start work on a big project, the spreadsheet line items become sticky notes on our kanban board, and the project manager tracks progress on them every day. Any time we go over or have an item we didn't consider, that goes up as a tiny red sticky, and it also goes into our burn-down report. Those two tools together provide invaluable feedback to the whole team.
Here's a pic of a typical kanban board, about halfway through a small project.
You might not be able to read the column headers, but they say Backlog, Brian, Keith, and Done. The backlog is broken down by groups (admin area, etc), and the developers have a column that shows the item(s) they're working on.
If you could look closely, all those notes have the estimated number of hours on them, and the ones in my column, if you were to add them up, should equal around 8, since that's how many hours are in my work day. It's unusual to have four in one day. Keith's column is empty, so he was probably out on this day.
If you have no idea what I'm talking about re: stand-up meetings, scrum, burn-down reports, etc then take a look at the scrum methodology. We don't follow it to the letter, but it has some great ideas not only for doing estimations, but for learning how to predict when your project will ship as new work is added and estimates are missed or met or exceeded (it does happen). You can look at this awesome tool called a burn-down report and say: we can indeed ship in one month, and let's look at our burn-down report to decide which features we're cutting.
FogBugz has something called Evidence-Based Scheduling which might be an easier, more automated way of getting the benefits I described above. Right now I am trying it out on a small project that starts in a few weeks. It has a built-in burn down report and it adapts to your scheduling inaccuracies, so that could be quite powerful.
Update: Just a quick note. A few years have passed, but so far I think everything in this post still holds up today. I updated it to use the word kanban, since the image above is actually a kanban board.
There is no general technique. You will have to rely on your (and your developers') experience. You will have to take into account all the environment and development-process variables as well. And even if you cope with all this, there is a big chance you will miss something.
I do not see any point in estimating the programming time only. The development process is so interconnected that estimating one part of it alone won't produce any valuable result. The whole thing should be estimated, including programming, testing, deploying, developing the architecture, writing docs (tech docs and the user manual), creating and managing tickets in an issue tracker, meetings, vacations, sick leave (sometimes it is better to wait for the person than to assign the task to someone else), planning sessions, coffee breaks.
Here is an example: it takes only 3 minutes for an egg to cook once you drop it into the frying pan. But if you say that it takes 3 minutes to make a fried egg, you are wrong. You missed out:
getting out the frying pan (do you have one ready? Do you have to go and buy one? Do you have to wait in a queue for this frying pan?)
making the fire (do you have a stove? Will you have to get logs to build a fireplace?)
getting the oil (have any? have to buy some?)
getting an egg
frying it
serving it on a plate (have one ready? Is it clean? Wash it? Buy it? Wait for the dishwasher to finish?)
cleaning up after cooking (you won't leave the dirty frying pan lying around, will you?)
Here is a good starting book on project estimation:
http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
It has a good description of several estimation techniques. It can get you up to speed in a couple of hours of reading.
Good estimation comes with experience, or sometimes not at all.
At my current job, my 2 co-workers (who apparently had a lot more experience than me) usually underestimated times by a factor of 8 (yes, EIGHT). I, OTOH, have only once in the last 18 months gone over an original estimate.
Why does this happen? Neither of them appeared to actually know what they were doing, code-wise, so they were literally thumb-sucking their estimates.
Bottom line:
Never underestimate, it is much safer to overestimate. Given the latter you can always 'speed up' development, if needed. If you are already on a tight time-line, there is not much you can do.
If you're using FogBugz, then its Evidence Based Scheduling makes estimating a completion date very easy.
You may not be, but perhaps you could apply the principles of EBS to come up with an estimation.
This is probably one of the big mysteries of the IT business. Countless failed software projects have shown that there is no perfect solution to this yet, but the closest thing to solving it that I have found so far is to use the adaptive estimation mechanism built into FogBugz.
Basically you break your milestones into small tasks and guess how long each will take you to complete. No task should be longer than about 8 hours. Then you enter all these tasks as planned features into FogBugz. When completing the tasks, you track your time with FogBugz.
FogBugz then evaluates your past estimates and actual time consumption, and uses that information to predict a window (with probabilities) within which you will have fulfilled your next few milestones.
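This isn't FogBugz's actual code, but the underlying idea can be sketched roughly like this: divide each new estimate by a velocity (estimate/actual) sampled from your own history, repeat many times, and read shipping probabilities off the resulting distribution. All numbers below are made up:

    using System;
    using System.Linq;

    static class EvidenceBasedSchedulingSketch
    {
        static void Main()
        {
            // Historical velocity = estimated hours / actual hours, per finished task (made up).
            double[] velocities = { 0.8, 1.0, 0.6, 1.2, 0.7 };

            // Estimates for the remaining tasks of the next milestone (made up).
            double[] estimates = { 6, 4, 8, 3 };

            var rng = new Random(42);
            var totals = new double[10000];
            for (int run = 0; run < totals.Length; run++)
                totals[run] = estimates.Sum(e => e / velocities[rng.Next(velocities.Length)]);

            Array.Sort(totals);
            Console.WriteLine($"50% chance of finishing within {totals[totals.Length / 2]:F1} h");
            Console.WriteLine($"95% chance of finishing within {totals[(int)(totals.Length * 0.95)]:F1} h");
        }
    }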
Overestimating is rather better than underestimating. That's because we don't know the "unknowns", and (in most cases) specifications do change during the software development lifecycle.
In my experience, we use iterative steps (just like in agile methodologies) to determine our timeline. We break projects down into components and overestimate those components. The sum of these estimates is used, and we add extra time for regression testing and deployment, and all the good work...
I think that you have to go back from your past projects, and learn from those mistakes to see how you can estimate wisely.

Number of lines of code in a lifetime

One of the companies required its prospective employees to state the number of lines of code they have written in their lifetime in a certain programming language, like Java or C#. Since most of us have years of experience on different projects in multiple languages, and hardly keep records of this, what would be the best approach to calculating this metric? I am sure the smart members of stackoverflow.com will have some ideas.
This is a very respected company in its domain and I am sure they have some very good reason to ask this question. But what also makes it difficult to answer is the type of code to consider. Should I only include the difficult algorithms that I implemented, or any code I wrote, e.g. for a POJO that had 300 properties and whose getters/setters were generated by the IDE?
The best response to such a question is one of the following:
Why do you want to know?
What meaning would you attribute to such a number?
Is it OK if I just up and leave just about now?
I would seriously question the motives behind anyone asking such a question either of current or prospective employees. It is most likely the same type of company that would start doing code reviews focusing on the number of lines of code you type.
Now, if they argue that the number of lines of code is a measure of the experience of a programmer, then I would definitely leave the interview at that point.
Simple solutions can be found for complex problems, and they are typically better than just throwing enough lines of code at the problem and hoping it will sort itself out. Since the number of bugs produced scales at least linearly with the number of statements, I would say that the inverse is probably a better measure, combined with the number of problems they've tackled.
As a test-response, I would ask this:
If I am able to solve problems A, B and C in 1,000 lines of code, and another programmer solves the same problems in 500 lines of code, which of us is the better programmer? (And the answer would be: not enough information to judge.)
Now, if you still want to estimate the number of lines, I would simply start thinking about the projects the person has written and compare their size with a known quantity. For instance, I have a class library that currently runs to about 130K lines of code, and I've written similar things in Delphi and other languages, plus some sizable application projects, so I would estimate that I have a good 10 million lines of code of my own at least. Is the number meaningful? Not in the slightest.
Sounds like this is D E Shaw's questionnaire?
This seems like one of those questions like 'How many ping-pong balls could you fit in a Boeing 747?' In that case, the questioner wants to see you demonstrate your problem solving skills more than know how many lines of code you've actually written. I would be careful not to respond with any criticism of the question, and instead honestly try to solve the problem ; )
Take a look at ohloh. The site shows metrics from open source projects.
The site estimates that 107,187 lines of code correspond to an effort of 27 person-years (about 4,000 lines of code per year).
An example of the silliness of such a metric: that number is from a project I've been toying with outside work for 2 years.
There are basically three ways of dealing with ridiculous requests for meaningless metrics.
Refusing to answer, challenging the questioner for their reasons and explaining why those reasons are silly.
Spending time gathering all the information you can, and calculating the answer to the best of your ability.
Making up a plausible answer, and moving on with as little emotional involvement in the stupidity as possible.
The first answers I see seem to be taking the first approach. Think about whether you still want the job despite the stupidity of their demands. If the answer is still yes, avoid number 1.
The second method would involve looking at your old code repositories from old projects.
In this case, I would go with the third way.
Multiply the number of years you have worked on a language by 200 work days per year, by 20 lines of code a day, and use that.
If you are claiming more than one language per year, apportion it out between them.
If you have been working more on analysis, design or management, drop the figure by three quarters.
If you have been working in a high-ceremony environment (defence, medicine), drop the figure by an order of magnitude.
If you have been working on an environment with particularly low ceremony, increase it by an order of magnitude.
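A quick sketch of that back-of-the-envelope arithmetic (all the multipliers are the made-up ones above):

    using System;

    static class LifetimeLocGuess
    {
        // All factors are the made-up ones from the recipe above; adjust to taste.
        static double Estimate(double yearsWithLanguage,
                               bool mostlyAnalysisOrManagement,
                               bool highCeremonyEnvironment,
                               bool lowCeremonyEnvironment)
        {
            double loc = yearsWithLanguage * 200 /* work days/year */ * 20 /* LOC/day */;
            if (mostlyAnalysisOrManagement) loc *= 0.25; // drop by three quarters
            if (highCeremonyEnvironment)    loc /= 10;   // defence, medicine, ...
            if (lowCeremonyEnvironment)     loc *= 10;
            return loc;
        }

        static void Main() =>
            // e.g. 5 years with one language, plain development work: 20,000 lines
            Console.WriteLine(Estimate(5, false, false, false));
    }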
Then put the stupidity behind you and get on with your life as quickly as possible.
Depending on what they do with the answer, I don't think this is a bad question. For example, if a candidate puts JavaScript on their resume, I want to know how much JavaScript have they actually written. I may ask, for example, for the number of lines in the largest JavaScript project they've written. But I'm only looking for a sense of scale, not an actual number. Is it 10, 100, 1000, or 10,000 lines?
When I ask, I'll make very clear that I'm just looking for a crude number to gauge the size of the project. I hope the employer in the questioner's case is after the same.
It is an interesting metric to ask for considering you could write many many lines of bad code instead of writing just a few smart ones.
I can only assume they consider more lines to be better than fewer. Would it be better not to plan at all and just start writing code? That would be a great way to write more lines of code, since when I do that I usually end up writing everything at least twice.
Smart Stack Overflowers would generally avoid organizations that ask this kind of question, unless the correct answer is "huh, wtf??"
If you were to be truly honest then you'd say that you don't know because you have never viewed it as a valid metric. If the interviewer is a reasonable/rational person, then this is the answer they are looking for.
The only other option to saying you don't know is to guess, and that really isn't demonstrating problem solving skills.
Why bother calculating this metric without a good reason? And some random company asking for the metric really isn't a good reason.
If the company's question is actually serious, and you think the interview might lead to something interesting, then I would just pick a random number in order to see where that leads :-)
Ha, this reminds me of when I took over a C-based testing framework which started out as 20K+ lines that I ended up collapsing into 1K LOC by factoring the code down into subroutines, instead of the 20K lines of diarrhea code written by the original author. Unfortunately, I got spanked harder for any errors in the code, as my KLOCs written actually went negative... I would think long and hard about shrinking the code base in a metrics-driven organization.
Even if I agree with the majority in saying that this is not a really good metric, if it's a serious company, as you say, they may have their reasons to ask this. This is what I would probably do:
Take one of your existing projects, get the number of lines and divide it by the time it took you to code it. This will give you a kind of lines-per-hour metric. Then estimate how much time you have worked with that specific language and multiply it by your already-calculated metric. I honestly don't think it's a great method, but to be honest, this isn't a great question either. I would also tell the company the strategy I used to come up with this number. Maybe, MAYBE, that is what they want: to know your opinion about this question and how you would answer it? :p
Or, they just want to know whether you have some experience, so guess an impressive number and write it down :D
"This is a very respected company in its domain and I am sure they have some very good reason to ask this question"
And I am very sure they don't, because "being respected" does not mean "they do everything right", and this is certainly not right, or if it is, then it's at least dumb in my opinion.
What counts as "lines of code"? I estimate that I have written around 250,000 lines of C# code, possibly a lot more. The problem? 95% of it was throwaway code, and not all of it was for learning. I still find myself writing a small 3-line program for the tenth time simply because it's easier to write those three lines again (and change a parameter) than to go search for the existing ones.
Also, lines of code mean nothing. Say I have two guys, and one has written 20% more lines than the other, but those 20% were unnecessarily complicated lines, "loop unrolling" and otherwise useless stuff that could have been refactored out.
So sorry, respected company or not: asking for lines of code is a sure sign that they have no clue about measuring the efficiency of their programmers, which means they have to rely on stone-age techniques like measuring LoC, which are about as accurate as stone-age calendars. Which means it's possibly a good place to work if you like to slack off and inflate your numbers every once in a while.
Okay, that was more a rant than an answer, but I really see absolutely no good reason for this number whatsoever.
And nobody has yet cited the Bill Atkinson -2000 lines story...
In my Friday afternoon (well, about one Friday per month) self-development exercises at work over the past year, tests, prototypes and infrastructure included, I've probably written about 5 kloc. However one project took an existing 25kloc C/C++ application and reimplemented it as 1100 lines of Erlang, and another took 15kloc of an existing C library and turned it into 1kloc of C++, so the net is severely negative. And the only reason I have those numbers was that I was looking to see how negative.
I know this is an old post, but this might be useful to someone anyway...
I recently moved on from a company I worked at for roughly 9.5 years as a Java developer. All our code was in CVS, then SVN, with Atlassian Fisheye providing a view into it.
When I left, Fisheye was reporting my personal, total LOC as +- 250,000. Here's the Fisheye description of its LOC metric, including the discussion on how each SVN user's personal LOC is calculated. Note the issues with branching and merging in SVN, and that LOC should usually only be based on TRUNK.

How long does it really take to do something?

I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't start planning for it but then I think that maybe it's for motivational purposes.
Ryan
It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).
This is based on Joel Spolsky's Evidence Based Scheduling (essential reading), where he explains that the other primary aspect is breaking your tasks down into bite-sized (16-hour max) chunks, estimating those, and adding them together to arrive at your final project total.
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects of between 1 and 6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects, as there are too many factors that mean the metrics from one aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates are going to have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration, as advocated by agile proponents: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being spent effectively. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available to do it. No matter how ruthless you are at cutting unnecessary features, you would have been more brutal if the deadlines were even tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.
I believe Joel has written an article on this: what you can do is ask each developer on the team to lay out their tasks in detail (all the steps that need to be done) and ask them to estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias for each developer. When a new project is started, ask them to estimate the time again, and multiply that by each developer's bias to get values close to what can really be expected.
After a few projects, you should have very good estimates.
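A minimal sketch of that bookkeeping (all the hours are made up; bias here is total actual over total estimated, applied to the developer's next set of estimates):

    using System;
    using System.Linq;

    static class EstimationBias
    {
        static void Main()
        {
            // (estimated, actual) hours from one developer's past tasks - made up.
            var history = new[] { (est: 4.0, actual: 6.0), (est: 8.0, actual: 10.0), (est: 2.0, actual: 4.0) };

            double bias = history.Sum(h => h.actual) / history.Sum(h => h.est);

            double[] newEstimates = { 5, 3, 8 };          // the developer's raw estimates, in hours
            double corrected = newEstimates.Sum() * bias;

            Console.WriteLine($"Bias factor: {bias:F2}");
            Console.WriteLine($"Raw estimate: {newEstimates.Sum()} h, corrected: {corrected:F1} h");
        }
    }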

When, if ever, is "number of lines of code" a useful metric?

Some people claim that code's worst enemy is its size, and I tend to agree. Yet every day you keep hearing things like
I write blah lines of code in a day.
I own x lines of code.
Windows is x million lines of code.
Question: When is "#lines of code" useful?
ps: Note that when such statements are made, the tone is "more is better".
I'd say it's when you're removing code to make the project run better.
Saying you removed "X number of lines" is impressive, and far more helpful than saying you added lines of code.
I'm surprised nobody has mentioned Dijkstra's famous quote yet, so here goes:
My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
The quote is from an article called "On the cruelty of really teaching computing science".
It's a terrible metric, but as other people have noted, it gives you a (very) rough idea of the overall complexity of a system. If you're comparing two projects, A and B, and A is 10,000 lines of code, and B is 20,000, that doesn't tell you much - project B could be excessively verbose, or A could be super-compressed.
On the other hand, if one project is 10,000 lines of code, and the other is 1,000,000 lines, the second project is significantly more complex, in general.
The problems with this metric come in when it's used to evaluate productivity or level of contribution to some project. If programmer "X" writes 2x the number of lines as programmer "Y", he might or might not be contributing more - maybe "Y" is working on a harder problem...
When bragging to friends.
At least, not for progress:
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” --Bill Gates
There is one particular case when I find it invaluable. When you are in an interview and they tell you that part of your job will be to maintain an existing C++/Perl/Java/etc. legacy project. Asking the interviewer how many KLOC (approx.) are involved in the legacy project will give you a better idea as to whether you want their job or not.
It's useful when loading up your line printer, so that you know how many pages the code listing you're about to print will consume. ;)
Reminds me of this:
The present letter is a very long one, simply because I had no leisure to make it shorter.
--Blaise Pascal.
Like most metrics, they mean very little without context. So the short answer is: never (except for the line printer - that's funny! Who prints out programs these days?).
An example:
Imagine that you're unit-testing and refactoring legacy code. It starts out with 50,000 lines of code (50 KLOC) and 1,000 demonstrable bugs (failed unit tests). The ratio is 1K/50KLOC = 1 bug per 50 lines of code. Clearly this is terrible code!
Now, several iterations later, you have reduced the known bugs by half (and the unknown bugs by more than that most likely) and the code base by a factor of five through exemplary refactoring. The ratio is now 500/10000 = 1 bug per 20 lines of code. Which is apparently even worse!
Depending on what impression you want to make, this can be presented as one or more of the following:
50% fewer bugs
five times less code
80% less code
a 60% worsening of the bugs-to-code ratio
All of these are true (assuming I didn't screw up the math), and they all suck at summarizing the vast improvement that such a refactoring effort must have achieved.
To paraphrase a quote I read about 25 years ago,
"The problem with using lines of code as a metric is it measures the complexity of the solution, not the complexity of the problem".
I believe the quote is from David Parnas in an article in the Journal of the ACM.
There are a lot of different Software Metrics. Lines of code is the most used and is the easiest to understand.
I am surprised how often the lines-of-code metric correlates with the other metrics. Instead of buying a tool that can calculate cyclomatic complexity to discover code smells, I just look for the methods with many lines, and they tend to have high complexity as well.
A good example of use of lines of code is in the metric: Bugs per lines of code. It can give you a gut feel of how many bugs you should expect to find in your project. In my organization we are usually around 20 bugs per 1000 lines of code. This means that if we are ready to ship a product that has 100,000 lines of code, and our bug database shows that we have found 50 bugs, then we should probably do some more testing. If we have 20 bugs per 1000 lines of code, then we are probably approaching the quality that we usually are at.
A bad example of use is to measure developer productivity. If you measure developer productivity by lines of code, then people tend to use more lines to deliver less.
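The sanity check from the good example can be sketched like this (the 20 bugs per kLOC rate and the 100,000-line product are the figures quoted above; the 80% threshold is an arbitrary illustration):

    using System;

    static class DefectDensityCheck
    {
        static void Main()
        {
            double bugsPerKloc = 20;     // the organization's historical rate quoted above
            double kloc = 100;           // a 100,000-line product
            int bugsFoundSoFar = 50;

            double expectedBugs = bugsPerKloc * kloc;   // ~2,000 for this code base
            Console.WriteLine($"Expected ~{expectedBugs} bugs based on history; found {bugsFoundSoFar} so far.");

            // The 80% threshold is arbitrary - the point is only that 50 found bugs
            // is nowhere near the historical expectation, so testing isn't done.
            if (bugsFoundSoFar < 0.8 * expectedBugs)
                Console.WriteLine("Probably needs more testing before shipping.");
        }
    }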
Answer: when you can talk about negative lines of code. As in: "I removed 40 extraneous lines of code today, and the program is still functioning as well as before."
I'd agree that taking the total number of lines of code in a project is one way to measure complexity.
It's certainly not the only measure of complexity. For example debugging a 100 line obfuscated Perl script is much different from debugging a 5,000 line Java project with comment templates.
But without looking at the source, you'd usually think more lines of code is more complex, just as you might think a 10MB source tarball is more complex than a 15kb source tarball.
It is useful in many ways.
I don't remember the exact number, but Microsoft had a webcast that said that for every X lines of code, on average, there are Y bugs. You can use that statement as a baseline for several things:
How well a code reviewer is doing their job.
Judging the skill level of two employees by comparing their bug ratios over several projects.
Another thing we look at is: why are there so many lines? Often, when a new programmer is put in a jam, they will just copy and paste chunks of code instead of creating functions and encapsulating.
I think that "I wrote X lines of code in a day" is a terrible measure. It takes no account of the difficulty of the problem, the language you're writing in, and so on.
It seems to me that there's a finite limit of how many lines of code I can refer to off the top of my head from any given project. The limit is probably very similar for the average programmer. Therefore, if you know your project has 2 million lines of code, and your programmers can be expected to be able to understand whether or not a bug is related to the 5K lines of code they know well, then you know you need to hire 400 programmers for your code base to be well covered from someone's memory.
This will also make you think twice about growing your code base too fast and might get you thinking about refactoring it to make it more understandable.
Note I made up these numbers.
The Software Engineering Institute's Process Maturity Profile of the Software Community: 1998 Year End Update (which I could not find a link to, unfortunately) discusses a survey of around 800 software development teams (or perhaps it was shops). The average defect density was 12 defects per 1000 LOC.
If you had an application with 0 defects (it doesn't exist in reality, but let's suppose) and wrote 1000 LOC, on average, you can assume that you just introduced 12 defects into the system. If QA finds 1 or 2 defects and that's it, then they need to do more testing as there are probably 10+ more defects.
It's a metric of productivity, as well as complexity. Like all metrics, it needs to be evaluated with care. A single metric usually is not sufficient for a complete answer.
I.e., a 500-line program is not nearly as complex as a 5,000-line one. Now you have to ask other questions to get a better view of the program... but now you have a metric.
It's a great metric for scaring/impressing people. That's about it, and definitely the context I'm seeing in all three of those examples.
Lines of code are useful to know when you're wondering if a code file is getting too large. Hmmm...This file is now 5000 lines of code. Maybe I should refactor this.
When you have to budget for the number of punch cards you need to order.
I wrote two blog posts detailing the pros and cons of counting lines of code (LoC):
How do you count your number of Lines Of Code (LOC)?: The idea is to explain that you need to count the logical number of lines of code instead of a physical count. To do so you can use tools like NDepend, for example.
Why is it useful to count the number of Lines Of Code (LOC)?: The idea is that LoC should never be used to measure productivity, but rather for test-coverage estimation and software deadline estimation.
As most people have already stated, it can be an ambiguous metric, especially if you are comparing people coding in different languages.
5,000 lines of Lisp != 5,000 lines of C
Always. Bunch o' rookies on this question. Masters write code prolifically and densely. Good grads write lots of lines, but with too much fluff. Crappers copy lines of code. So, first do a Tiles analysis or gate, of course.
LoC must be used if your org doesn't do any complexity points, feature points/function points, commits, or other analysis.
Any developer who tells you not to measure him or her by LoC is shite. Any master cranks code out like you would not believe. I've worked with a handful who are 20x to 200x as productive as the average programmer. And their code is very, very, very compact and efficient. Yes, like Dijkstra, they have enormous mental models.
Finally, in any undertaking, most people are not good at it and most doing it are not great. Programming is no different.
Yes, do a hit analysis on any large project and you'll find that 20%-plus is dead code. Again, master programmers regularly annihilate dead code and crapcode.
When you are refactoring a code base and can show that you removed lines of code, and all the regression tests still passed.
Lines of code isn't so useful really, and if it is used as a metric by management it leads to programmers doing a lot of refactoring to boost their scores. In addition poor algorithms aren't replaced by neat short algorithms because that leads to negative LOC count which counts against you. To be honest, just don't work for a company that uses LOC/d as a productivity metric, because the management clearly doesn't have any clue about software development and thus you'll always be on the back foot from day one.
In competitions.
When the coder doesn't know you are counting lines of code, and so has no reason to deliberately add redundant code to game the system. And when everyone in the team has a similar coding style (so there is a known average "value" per line.) And only if you don't have a better measure available.
They can be helpful for indicating the magnitude of an application - it says nothing about quality! My point here is just that if you indicate you worked on an application with 1,000 lines and they have an application that is (roughly) 500K lines, a potential employer can tell whether you have large-system experience vs. small utility programming.
I fully agree with warren that the number of lines of code you remove from a system is more useful than the lines you add.
Check out wikipedia's definition: http://en.wikipedia.org/wiki/Source_lines_of_code
SLOC = 'source lines of code'
There is actually quite a bit of time put into these metrics where I work. There are also different ways to count SLOC.
From the wikipedia article:
There are two major types of SLOC measures: physical SLOC and logical SLOC.
Another good resource: http://www.dwheeler.com/sloc/
It is a very useful idea when it is associated with the number of defects. "Defects" give you a measure of code quality: the fewer the defects, the better the software. It is nearly impossible to remove all defects, and on many occasions a single defect can be harmful, even fatal.
However, it does not seem that defect-free software exists.

How Much Time Should be Allotted for Testing & Bug Fixing

Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate.
So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else?
Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA) and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to figure out time estimates for bug fixing, but not for testing)?
It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, the number of developers available at this stage in the project (many project managers will move people onto something new at the end).
As Rob Rolnick says, 1:1 is a good rule of thumb; however, in cases where a specification is bad, the client may push for "bugs" which are actually badly specified features. I was recently involved in a project which used many releases, but more time was spent on bug fixing than actual development due to the terrible specification.
Ensure a good specification/design and your testing/bug fixing time will be reduced because it will be easier for testers to see what and how to test and any clients will have less lee-way to push for extra features.
Maybe I just write buggy code, but I like having a 1:1 ratio between devs and tests. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there could be a good deal of time between when development starts and when your alpha, beta, and ship dates are. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix.
A good tester, who finds bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK.) Simply put, I am still extremely familiar with my code at that point, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time for bug fixing. At least when I do estimates. So in a 16-week run I'd leave around 2-3 weeks.
Only a good amount of accumulated statistics from previous projects can help you give precise estimates. If you have a well-defined set of requirements, you can make a rough calculation of how many use cases you have. As I said, you need to have some statistics for your team. You need to know the average bugs-per-LOC number to estimate the total bug count. If you don't have such numbers for your team, you can use industry averages. After you have estimated the LOC (number of use cases multiplied by LOC per use case) and the average bugs per line, you can give a more or less accurate estimate of the time required to release the project.
From my practical experience, the time spent on bug fixing is equal to or greater than (in 99% of cases :)) the time spent on the original implementation.
From the testing Bible:
Testing Computer Software
p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
Use a language with Design-by-Contract or "Code-contracts" (preconditions, check assertions, post-conditions, class-invariants, etc) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code with its contracts.
Use as much self-built code-generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-coded code. Why write what you can generate? However, do not use OPG (other-peoples-generators)! Code YOU generate is code you control and know.
You can expect the ratio to invert over the course of your project - that is, you will write lots of hand-coded classes and contracts at the start (1:1) of your project. As you see patterns, teach a code generator YOU WRITE to generate the code for you and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that the equation has inverted: you're writing less of your core code, and your focus shifts to your "leaf code" (last mile) or specialized (vs. generalized and generated) code.
Finally--get a code analyzer. A good, automated code analysis rule system and engine will save you oodles of time finding "stupid-bugs" because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules coming with it, but are learning to write our own rules for our own discovered "gotchas". Such analyzers not only save you in terms of bugs, but enhance your design--even GREEN programmers "get it" rather quickly and stop making rookie mistakes earlier and learn faster!
The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design-by-Contract, Code Analysis, and Code Generation, we have re-written a 14 year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!
