Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
A friend was telling me the other day that there is a pyramid of costs for fixing a problem in the software development life cycle; he was referring to how the cost of fixing a problem grows at each stage. Where could I find this?
For example,
To fix a problem at the requirements stage costs 1.
To fix a problem at the development stage costs 10.
To fix a problem at the testing stage costs 100.
To fix a problem at the production stage costs 1000.
(These numbers are just examples)
I would be interested in seeing more about this if anyone has references.
The Incredible Rate of Diminishing Returns of Fixing Software Bugs
(Stefan Priebsh: OOP and Design Patterns: Codeworks DC in September 2009)
This is a well-known result in empirical software engineering that has been replicated and verified over and over again in countless studies. Which is very rare in software engineering, unfortunately: most software engineering "results" are basically hearsay, anecdotes, guesses, opinions, wishful thinking or just plain lies. In fact, most software engineering probably doesn't deserve the "engineering" brand.
Unfortunately, despite being one of the most solid, most scientifically and statistically sound, most heavily researched, most widely verified, most often replicated results of software engineering, it is also wrong.
The problem is that none of those studies control their variables properly. If you want to measure the effect of a variable, you have to be very careful to change only that one variable and to make sure the other variables don't change at all. Not "change a few variables", not "minimize changes to other variables". "Only one" and the others "not at all".
Or, in the brilliant Zed Shaw's words: "If you want to measure something, then don't measure other shit".
In this particular case, they did not just measure in which phase (requirements, analysis, architecture, design, implementation, testing, maintenance) the bug was found, they also measured how long it stayed in the system. And it turns out that the phase is pretty much irrelevant, all that matters is the time. It's important that bugs be found fast, not in which phase.
This has some interesting ramifications: if it is important to find bugs fast, then why wait so long for the phase that is most likely to find bugs: testing? Why not put the testing at the beginning?
The problem with the "traditional" interpretation is that it leads to inefficient decisions. Because you assume you need to find all bugs during the requirements phase, you drag out the requirements phase unnecessarily long: you can't run requirements (or architectures, or designs), so finding a bug in something that you cannot even execute is freaking hard! Basically, while fixing bugs in the requirements phase is cheap, finding them is expensive.
If, however, you realize that it's not about finding the bugs in the earliest possible phase, but rather about finding the bugs at the earliest possible time, then you can make adjustments to your process, so that you move the phase in which finding bugs is cheapest (testing) to the point in time where fixing them is cheapest (the very beginning).
Note: I am well aware of the irony of ending a rant about not properly applying statistics with a completely unsubstantiated claim. Unfortunately, I lost the link where I read this. Glenn Vanderburg also mentioned this in his "Real Software Engineering" talk at the Lone Star Ruby Conference 2010, but AFAICR, he didn't cite any sources, either.
If anybody knows any sources, please let me know or edit my answer, or even just steal my answer. (If you can find a source, you deserve all the rep!)
See pages 42 and 43 of this presentation (pdf).
Unfortunately the situation is as Jörg depicts, and in fact somewhat worse: most of the references cited in this document strike me as bogus, in the sense that the paper cited either is not original research, or does not contain words supporting the claim being made, or, in the case of the 1998 paper about Hughes (p54), contains measurements that in fact contradict what is implied by the curve on p42 of the presentation: a different shape of curve, and a modest 5x to 10x factor of cost-to-fix between the requirements phase and the functional test phase (actually decreasing in system test and maintenance).
Never heard of it being called a pyramid before, and that seems a bit upside-down to me! Still, the central thesis is widely considered to be correct. Just think about it: the cost of fixing a bug in the alpha stage is often trivial. By the beta stage it might take a bit more debugging and user reports. After shipping it could be very expensive: a whole new version has to be created, you have to worry about breaking in-production code and data, and there may also be lost sales due to the bug.
Try this article. It uses the "cost pyramid" argument (without naming it), among others.
Closed. This question is opinion-based. It is not currently accepting answers.
...or why did they fail?
I am going to build a proof of concept of something which could be classified as CASE, but I want to avoid some of the mistakes made before.
Thanks!
First, I think diagrams provide real value when they're small and simple. Large, highly detailed diagrams mostly waste a lot of paper, time, hard drive space, etc. A pencil and paper work quite nicely for diagrams that are small enough (and simple enough) to be useful. A software tool only helps when you're producing a diagram that's so large and complex that it's practically guaranteed to be useless.
Second, with most CASE tools, the fastest way to draw a diagram is to start by writing some (possibly simplified, mockup) code, and then "reverse engineer" the diagram from the code. Drawing the diagram directly is often slower than writing the code. To provide much real value, producing the high level diagram has to be quite a bit simpler than writing equivalent code.
When you get down to it, I've rarely seen CASE tools used as an actual "aid" to "software engineering" anyway. In most cases I've seen, the software engineering is done entirely separately, and the CASE tools were used to reverse engineer diagrams from code that was already written. The people producing the diagrams generally found them useless, and included them in reports to higher-level managers for "wow factor". The only "aid" they hoped for from the diagrams was impressing management with the complexity of what they were doing in the hope of increasing funding (some included diagrams of things like portions of the standard library, purely to add to apparent complexity).
As to how the tools failed at the software engineering part, I don't know of a single simple answer -- from what I've seen, I'd say it's more of a "death of a thousand nicks", than any single, glaring problem. If I did have to point to a single large problem, it would be that the ones I've looked at don't really take Patterns into account. Just for example, what I'd like is to work at an even higher level of abstraction, so I can point to some functionality, and play with things like "how would things look if I were to implement the following parts of that functionality as decorator classes?" Yes, I can draw one diagram with them as decorator classes, and one without, but I don't have a really quick, easy way to say "transform this entire hierarchy to move X, Y, and Z into decorator classes."
Contrast a typical CASE tool with a spreadsheet. In a spreadsheet, I can change one cell, and it will automatically recalculate how that affects anything else in the spreadsheet that depends on it. By contrast, CASE tools seem (at least to me) stuck at roughly the level of a grid control, where I can make changes in a cell, but I still have to manually track what other cells depend on that one, and what formulas to use, and calculate and modify all the affected cells by hand. Yes, if I want to print out a sheet of the right values, being able to edit them on the computer so I don't have eraser marks in the cells and such would be an improvement -- but only a small improvement, not the kind that turned personal computers from toys for a few hobbyists into a staple of essentially every business on earth.
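To make the contrast concrete, here is a minimal, hypothetical sketch (not any real CASE or spreadsheet product) of the kind of automatic dependency recalculation described above; the class and cell names are invented for the example:

```python
# Minimal sketch of spreadsheet-style recalculation: change one cell and
# everything that depends on it is recomputed automatically. This is only
# an illustration of the idea discussed above, not a real CASE/model tool.

class Sheet:
    def __init__(self):
        self.values = {}     # cell name -> literal value
        self.formulas = {}   # cell name -> function of the sheet

    def set_value(self, name, value):
        self.values[name] = value

    def set_formula(self, name, func):
        self.formulas[name] = func

    def get(self, name):
        # Recompute formulas on demand, so dependents are always up to date.
        if name in self.formulas:
            return self.formulas[name](self)
        return self.values[name]

sheet = Sheet()
sheet.set_value("price", 10.0)
sheet.set_value("quantity", 3)
sheet.set_formula("total", lambda s: s.get("price") * s.get("quantity"))
sheet.set_formula("with_tax", lambda s: s.get("total") * 1.2)

print(sheet.get("with_tax"))   # 36.0
sheet.set_value("quantity", 5) # change one "cell"...
print(sheet.get("with_tax"))   # ...and every dependent follows: 60.0
```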
If you look at the Wikipedia entry: http://en.wikipedia.org/wiki/Computer-aided_software_engineering then you'll see the "classic" tools from the 1990s. Having worked with many of those tools, I would suggest that the focus on commercialisation fragmented the market. Typically, not only did you pay huge sums for the tools, but then for consulting, training and run-time environments as well. With so many tools on offer, it was hard to build a competent team specialising in a given tool.
Furthermore, it didn't help that the tools were over-sold, promising management unrealistic increases in productivity. There isn't any other area of IT where I've seen so much shelf-ware: products used for one project and then abandoned, often along with the project too.
The concepts of CASE live on in Eclipse and many other MDE tools. The problems of steep learning curves and fragmentation have still not been solved. Whilst the cost of the tools has come down (to free in many cases), the training, consulting and ramp-up costs are still there.
Before you expend a lot of effort on your CASE tool, have a look at the fields of MDA, MDE, DSL, even UML. It's worth browsing the OMG web site as well.
At the end of the day you should focus on what you produce and not the tool. If you are able to automate some tasks then that's good. Building yet another CASE-like tool is a great intellectual exercise, but with minimal chances of commercial success. After all, IBM, Oracle and Computer Associates have only had sporadic successes with their tools, and they are still vigorously marketing them to enterprise customers.
I worked with KnowledgeWare back in the early '90s. My simple answer to the demise of CASE is that as soon as you printed the model it was old. Keeping the model and the code in sync became impossible. The first target platform was MicroFocus COBOL, but then came client-server in 94-95, followed by the internet in 97-98, and nobody really wanted to use CASE with those new platforms.
Closed. This question needs to be more focused. It is not currently accepting answers.
I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool for doing this? Such a tool can be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is already enough hand-waving around the most important things.
The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of Sloccount, which spells out the total effort using the lines of code, but not on a per-developer basis.
My idea of an ideal tool for this purpose would:
measure the complexity of the code (more complex is more effort, but more effort is not necessarily more contribution)
measure the decomposability/flexibility of the software (more decomposable is better)
how much library code is used -- using library code speeds up the development process, increases the associated risk, and requires the developer to already know or learn about the library.
be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code".
It is difficult to differentiate between the complexity in the implementation and the intrinsic complexity of the problem. Perhaps a comparison can be made with an equivalent open source counterpart if there is, or for each submodule separately.
If there is no such tool, is there no merit in having such a tool? Or do you believe in "I do work, I do not measure"? It takes time, after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has different goals, but perhaps that should mean there should be multiple standards, not no standards at all. This looks similar to how a company is valued in the market.
Update: after seeing a few initial answers: it does not make sense to imagine a tool that just outputs the percentages. Are there tools that can help humans (particularly managers) make better decisions? Or what are the sufficient statistics for making better decisions? Are these statistics available?
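For what it's worth, a very rough per-developer statistic can already be pulled from version control. Below is a minimal sketch, assuming a git repository; the script and its output are my own illustration, and, as the answers point out, lines changed are a poor proxy for actual contribution:

```python
# Rough per-developer "contribution" from git history, counting lines added
# and removed per author. Lines are a crude proxy at best, but this is the
# kind of raw statistic a manager could start from.
import subprocess
from collections import defaultdict

log = subprocess.run(
    ["git", "log", "--numstat", "--pretty=format:author\t%an"],
    capture_output=True, text=True, check=True,
).stdout

stats = defaultdict(lambda: [0, 0])  # author -> [lines added, lines removed]
author = None
for line in log.splitlines():
    if line.startswith("author\t"):
        author = line.split("\t", 1)[1]
    elif line.strip() and author:
        added, removed, _path = line.split("\t", 2)
        if added.isdigit() and removed.isdigit():  # numstat prints '-' for binaries
            stats[author][0] += int(added)
            stats[author][1] += int(removed)

total = sum(a + r for a, r in stats.values()) or 1
for author, (added, removed) in sorted(stats.items(), key=lambda kv: -sum(kv[1])):
    share = 100.0 * (added + removed) / total
    print(f"{author:30s} +{added:<7d} -{removed:<7d} {share:5.1f}%")
```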
I really doubt there is any reliable trustworthy way of measuring individual's contribution to the solution. Sometimes rewriting some complicated legacy code that results in less lines of code, less complicated solution (smaller cyclomatic complexity etc.) can be seen as a quite significant contribution, while in other cases deleting valuable code covering edge cases that results in the same statistics (less lines of code, smaller CC etc.) is definitely something bad. It all comes down to people, trust and cooperation, individualism in the team is almost always wrong and I would rather avoid it and especially not use it as a motivation factor.
This is a research topic in its own right. There are several tools that have tried to define metrics like code ownership. There are other approaches which tackle other aspects of collaborative development, for instance the trust we can place in the code.
There have also been several studies that tried to use the information from bug trackers, for instance to identify the developer who is most likely to introduce bugs. But it's hard to be objective (a brilliant developer who is assigned the most critical part of the system will still be more likely to introduce critical bugs).
It's actually hard to monetize development tasks. What is the cost of a bug? What is the gain of refactoring? That would, however, be one way to estimate the contribution of a developer.
The last cool tool I saw of this kind was the Game Plugin for the Hudson continuous integration system. A score is assigned to each developer according to their actions:
-10 if they break the build
-1 for breaking a test
+1 for fixing a test
etc.
That's again a way to somehow assess the contribution of the developer.
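As an illustration only, that kind of scoring boils down to a few lines. The event names and bookkeeping below are hypothetical, not the plugin's actual API; only the point values come from the list above:

```python
# Sketch of a continuous-integration "game" score, in the spirit of the
# Hudson Game plugin described above.
SCORES = {
    "broke_build": -10,
    "broke_test": -1,
    "fixed_test": +1,
}

def leaderboard(events):
    """events: iterable of (developer, event_name) pairs."""
    totals = {}
    for dev, event in events:
        totals[dev] = totals.get(dev, 0) + SCORES.get(event, 0)
    return sorted(totals.items(), key=lambda kv: -kv[1])

print(leaderboard([
    ("alice", "fixed_test"), ("alice", "fixed_test"),
    ("bob", "broke_build"), ("bob", "fixed_test"),
]))
# [('alice', 2), ('bob', -9)]
```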
All in all, I do feel like what you are asking for exists, but is still very immature.
I don't think you can get a tool to evaluate your share of the project. Measuring lines of source is all very well, but what of the quality of that source? You wouldn't want someone taking the credit for 200 lines of source if the job could have been easily done in 20...
Also, thinking of my employer for a moment, a lot of people contribute to the project in ways other than code. Immediate examples I can think of would be Project Managers and Testers - both of whom are essential, both of whom rightly deserve some credit.
Martin
The only thing that I could imagine would be a voting system. I have absolutely no idea, if that would work in your team or anywhere - but I'm sure, that you will need humans for any realistic estimation of code quality.
In Stroustrup's book on C++ I once read: "Don't try to solve social problems with technical means".
Thinking pragmatically, the attitude and the ability of a programmer could be estimated very quickly by doing a code review together and having a talk on relevant topics.
Thinking as an IT enthusiast and as a control freak, it shouldn't be very hard to implement a trainable machine-learning tool which uses version control, the bug database, etc. and generates real-time performance data for each contributor. E.g. R, KNIME or WEKA could be used for this.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
What are the main points to keep in mind when estimating the time for a research and development task? Suppose I have to estimate an "ABC" task using "WPF" technology and I have no experience with it, so I need to do some R&D for it.
Don't give an estimate until you have had time to play with the technology. Allocate a certain time (2 days, 1 week, whatever you can get from management) to understand the concepts and write some code yourself with it, to get a sense for what the development process takes and how steep the learning curve is. Then, estimate.
Pure Research Projects
Set a time or resource cap in addition to a number of interim milestones/reviews, to re-evaluate whether you can afford to continue. Ideally, before embarking on the research you will have a good idea of the potential benefits of succeeding. You might also want to define different grades of success and a contingency plan, in case the effort does not come to fruition, before you start.
The spiral model of development will come in handy.
Applying Existing Technology to a Problem
For current mainstream technologies such as WPF you might try to find out how long it would take someone with comparable experience to learn the technology. Evidence might be collected from other people's experience and from available training courses.
For non-current or niche technologies you might be better off hiring a consultant or sub-contracting the job (bear in mind the difference between consultant and contractor).
Grade the project on
Keeping Status Quo - Bug Fixing - Enhancement - New Functionality - New Product - Revolutionary
scale. Each position on the scale will usually mean a factor of 2 to 5 increase in risk and effort. Have a reference point: if it normally takes 2 days in your organisation, end-to-end, to fix a bug, you can gauge that an enhancement will take two to five times longer, anything between 4 and 10 days; of course, coding will only be a small proportion of this effort.
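As a back-of-the-envelope illustration of that grading heuristic (the 2x to 5x factors per step come from the paragraph above; the grade names and function are made up for the example):

```python
# Rough effort gauge based on the grading scale above: each step up the
# scale multiplies the baseline effort by roughly 2x to 5x. Illustrative
# only, not a real estimation tool.
GRADES = ["status quo", "bug fix", "enhancement", "new functionality",
          "new product", "revolutionary"]

def effort_range(baseline_days, grade, low=2.0, high=5.0):
    """(optimistic, pessimistic) effort in days, anchored on bug-fix turnaround."""
    steps = GRADES.index(grade) - GRADES.index("bug fix")
    return baseline_days * low ** steps, baseline_days * high ** steps

# Example: bugs normally take 2 days end-to-end in your organisation.
print(effort_range(2, "enhancement"))        # (4.0, 10.0)
print(effort_range(2, "new functionality"))  # (8.0, 50.0)
```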
Ideally, one should not give an estimate without solid evidence. After all, an estimate is a probability, and probabilities are mathematically significant figures, not gut feelings pulled out of thin air. (See "Software Estimation" by Steve McConnell for more on this.)
Unfortunately, too often we are required to provide estimates on tasks for which we have a great deal of uncertainty about the technologies that will be involved. This is the case, for example, of government grants and other non-technical scenarios. In these cases, and being pragmatic, it is good to provide an estimate even when we are not familiar with the technologies.
Techniques that I often use include uncertainty cones and timeboxed development.
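For the uncertainty cone, here is a minimal sketch of turning a single-point guess into a range; the multiplier ranges are the ones commonly quoted from McConnell's Software Estimation, so treat them as approximate rather than authoritative:

```python
# Turn a single-point estimate into a range using cone-of-uncertainty
# multipliers. The factors below are the ranges commonly quoted from
# McConnell's "Software Estimation"; treat them as approximate.
CONE = {
    "initial concept":             (0.25, 4.0),
    "approved product definition": (0.50, 2.0),
    "requirements complete":       (0.67, 1.5),
    "ui design complete":          (0.80, 1.25),
    "detailed design complete":    (0.90, 1.1),
}

def estimate_range(point_estimate_days, phase):
    low, high = CONE[phase]
    return point_estimate_days * low, point_estimate_days * high

# A 20-day guess made before any real R&D is really "5 to 80 days".
print(estimate_range(20, "initial concept"))        # (5.0, 80.0)
print(estimate_range(20, "requirements complete"))  # about 13.4 to 30 days
```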
Hope this helps.
The best way to approach it is to consult with someone who has been there already.
Their experience, plus a general idea of how good they are compared to your staff, should give you a fair estimation.
The older the technology is, the more experienced people there will be around, and the more places on the web to find answers to questions.
If you're researching something brand new... the data sources will be limited, and I would take any estimation and double it...
You could take a guess at how long you think it'll take you to research the new technology, then how long it'll take to do the development, and multiply that by two. Of course that's pretty fluffy, but usually anything involving estimating a task is pretty fluffy (well, at least I don't like doing it). There are so many factors involved when estimating: whether it be dealing with new technologies that could take longer than you think, or dealing with code written by other people, which can add an 'x' factor of complexity to what should be a simple task.
Usually when estimating time, it's best to at least have a general 'spike' where you sit down (whether by yourself, or even better with another team member) and have a play for an hour or two (or however long you choose). This at least gives you a bit of time to get better context for what you're dealing with. When looking at the new technology, perhaps read a bit of the documentation, and read and play with a 'getting started' guide. Then when you go back to the estimation table, you will have a better idea of what you're dealing with.
Closed. This question is opinion-based. It is not currently accepting answers.
Coming from an IT background, I've been involved with software projects, but I'm not a programmer. One of my biggest challenges is that, having a lot of experience in IT, people often turn to me to manage projects that include software development. The projects are usually outsourced and there isn't a budget for a full-time architect or PM, which leaves me in a position to evaluate the work being performed.
While I've managed to get through this in the past, I'm (with good reason) uneasy about accepting these responsibilities.
My question is, from the perspective of being technically experienced but not in programming, how can I evaluate whether the code is written well, besides just determining whether it works or not? Are there methodologies, tips, tricks of the trade, flags, signs, anything that would say: hey, this is junk, or hey, this is pretty damn good?
Great question. Should get some good responses.
Code cleanliness (indented well, file organization, folder structure)
Well commented (not just inline comments, but variables that say what they are, functions that say what they do, etc.)
Small understandable functions/methods (no crazy 300 line methods that do all sorts of things with nested if logic all over the place)
Follows SOLID principles
Is the amount of unit test code similar in size and quality to the code base of the project?
Is the interface code separate from the business logic code, which in turn should be separate from the infrastructure access code (email, database, web services, file system, etc.)? (See the sketch below.)
What does a static analysis or code metrics tool think of the code (NDepend, NDoc, NCover, etc.)?
There is a lot more to this... but this gets you started.
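To make the separation-of-concerns item in the checklist above concrete, here is a tiny, hypothetical sketch of interface, business logic and infrastructure kept apart (all names are invented for the example):

```python
# Tiny illustration of layering: the business rule knows nothing about how
# email is actually sent, and the "interface" layer only wires things together.
from typing import Protocol

class Mailer(Protocol):                  # infrastructure boundary
    def send(self, to: str, body: str) -> None: ...

class SmtpMailer:                        # infrastructure implementation
    def send(self, to: str, body: str) -> None:
        print(f"(pretend SMTP) to={to}: {body}")

def notify_overdue(invoice_total: float, customer_email: str,
                   mailer: Mailer) -> None:
    # Business logic: decides *whether* and *what* to send, not *how*.
    if invoice_total > 0:
        mailer.send(customer_email, f"You owe {invoice_total:.2f}")

if __name__ == "__main__":               # interface layer: just wiring
    notify_overdue(42.5, "customer@example.com", SmtpMailer())
```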
Code has 2 primary audiences:
The people who use it
The people who develop it
So you neeed 2 simple tests:
Run the code. Can you get it to do the job it is supposed to do?
Read the code. Can you understand the general intentions of the developer?
If you can answer yes to both of these, it is great code.
When reading the code, don't worry that you are not a programmer. If code is well written and documented, even a non-programmer should be able to guess much of what it is intended to achieve.
BTW: Great question! I wish more non-programmers cared about code quality.
First, set ground rules (that all programmers sign up to) that say what's 'good' and what isn't. Automate tests for those rules that you can measure (e.g. functions shorter than a certain number of lines, McCabe complexity, idioms that your coders find confusing). Then accept that 'good coding' is something you know when you see it rather than something you can actually pin down with a set of rules, and allow people to deviate from the standard provided they get agreement from someone with more experience. Similarly, such standards have to be living documents, adapted in the face of feedback.
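A rule such as "functions under a certain number of lines" really can be automated cheaply. Here is a minimal sketch using only the standard library; the 40-line threshold is arbitrary, and the script is only an illustration, not a substitute for a real linter:

```python
# Minimal automated check for one "ground rule": no function longer than
# MAX_LINES lines of Python.
import ast
import sys

MAX_LINES = 40

def long_functions(path):
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_LINES:
                yield node.name, node.lineno, length

if __name__ == "__main__":
    failed = False
    for path in sys.argv[1:]:
        for name, line, length in long_functions(path):
            print(f"{path}:{line}: {name} is {length} lines (max {MAX_LINES})")
            failed = True
    sys.exit(1 if failed else 0)
```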
Code reviews also work well, since not all such 'good style' rules can be automatically determined. Experienced programmers can say what they don't like about inexperienced programmers' code - and you have to get the original authors to change it so that they learn from their mistakes - and inexperienced programmers can say what they find hard to understand about other people's code - and, by being forced to read other people's code, they'll also learn new tricks. Again, this will give you feedback on your standard.
On some of your specific points, complexity and function size work well, as does code coverage during repeatable (unit) testing, but that last point comes with a caveat: unless you're working on something where high quality standards are a necessity (embedded code, as an example, or safety-critical code) 100% code coverage means you're testing the 10% of code paths that are worthwhile to test and the 90% that almost never get coded wrong in the first place. Worthwhile tests are the ones that find bugs and improve maintainability.
I think it's great you're trying to evaluate something that typically isn't evaluated. There have been some good answers above already. You've already shown yourself to be more mature in dealing with software by accepting that since you don't practice development personally, you can't assume that writing software is easy.
Do you know a developer whose work you trust? Perhaps have that person be a part of the evaluation process.
how can I evaluate whether the code is written well
There are various ways/metrics to define 'well' or 'good', for example:
Delivered on time
Delivered quickly
No bugs after delivery
Easy to install
Well documented
Runs quickly
Uses cheap hardware
Uses cheap software
Didn't cost much to write
Easy to administer
Easy to use
Easy to alter (i.e. add new features)
Easy to port to new hardware
...etc...
Of these, programmers tend to value "easy to alter", because their job is to alter existing software.
It's a difficult one, and this could be where your non-functional requirements will help you:
specify your performance requirements: transactions per second, response time, expected DB records over time,
require the delivery to include outcome from a performance analysis tool
specify the machine the application will be running on, you should not have to upgrade your hardware to run the app
For eyeballing the code and working out whether or not it's well written, it's tougher; the answers from @Andrew and @Chris cover it pretty much... you want code that looks good, is easy to maintain and is performant.
Summary
Use the Joel Test.
Why?
Thanks for a tough question. I was about to write a long answer on the merits of direct and indirect code evaluation, understanding your organisational context and perspective, figuring out a process and setting a criterion for code to be good enough, and then the difference between the code being perfect and just good enough, which still might mean "very impressive". I was about to refer to Steve McConnell's Code Complete and even suggest delegating the code audit to someone impartial you can trust, who is savvy enough business-wise and programming-wise to get a grasp of the context and perspective, apply the criteria sensibly and report the results neatly back to you. I was going to recommend looking at parts of the UI that are normally out of the end user's reach, in the same way as one would judge the quality of cleaning by checking for dirt in hard-to-reach places.
Well, and then it struck me: what is the end goal? In all but a few edge cowboy-coding scenarios, as a result of the audit you're likely to discover that the code is better than junk, but certainly not damn good, maybe just slightly below the good enough mark. And then what is next? There are probably going to be a few choices:
Changing the supplier.
Insisting on the code being re-factored.
Leaving things as they are and from that point on demanding better code.
Unfortunately, none of the options is ideal or even very good. Having made an investment, changing supplier is costly and quite risky: part of the software's conceptual integrity will be lost, and your company will have to, albeit indirectly, swallow the inevitable cost of the new supplier taking over the development and going through the learning curve (exactly the opposite of what most suppliers are going to tell you to try and get their foot in the door). And there is going to be a big risk of missing the original deadlines.
The option of insisting on code refactoring isn't perfect either. There is going to be a question of cost, and it's very likely that for various contractual and historical reasons you won't find yourself in a good negotiating position. In any case, re-writing software is likely to affect deadlines, and an organisation that couldn't do the job right the first time is very unlikely to produce much better code on the second attempt. The latter point is also pertinent to the third option: I would be dubious of any company producing better code without some, often significant, organisational change. Leaving things as they are is not good either: a piece of rotten code, unless totally isolated, is eventually going to poison the rest of the source.
This brings me to the actual conclusion, or in fact two:
Concentrate on picking the right software company in the first place, since going forward your options are going to be somewhat constrained.
Make use of your IT and management knowledge to pick a company that is focused on attracting and retaining good developers, and that creates a working environment and culture fit for the production of good quality code, instead of relying on post factum analysis.
It's needless to expand on the importance of choosing the right company in the first place as opposed to summative evaluation of the delivered project; hopefully the point is already made.
Well, how do we know the software company is right? Here I fully subscribe to the philosophy evangelised by Joel Spolsky: the quality of software directly depends on the quality of the people involved, which, as several studies have indicated, can vary by an order of magnitude. And through the workings of free markets, developers end up clustered in companies based on how much a particular company cares about attracting and retaining them.
As a general rule of life, the best programmers end up working with the best, good with good, average with average, and cowboy coders with other cowboy coders. However, there is a caveat. Most companies will have at least one or two very good developers they care about and try their hardest to retain. These devs are always put on the front line: to fight fires, to lure a customer, to prove the organisation's potential and competence. Working amongst no better than average colleagues, overstretched between multiple projects, and being treated as royalty, these star programmers sadly very often lose touch with reality and become prima donnas who won't "dirty" their hands with any actual programming work.
Unfortunately, programming talent doesn't scale, and it's unlikely that the prima donna is going to work on your project past the initial phase designed to lure you in and lock you in as a customer. In the end the code is going to be produced by a less talented colleague, and as a result you'll get what you'll get.
The solution is to look for a company where developer talent is more consistent and everyone is at least good enough to produce the right quality of code. And when it comes to choosing such an organisation, that's where the Joel Test comes in mighty handy. I believe it's especially suitable for application by someone who has no programming experience but a good understanding of IT and management.
The more points a company scores on the Joel Test, the more likely it is to attract and retain good developers and, most importantly, to provide them with the conditions to produce quality code. And since most great devs are actually in love with programming, all they need is to be teamed up, given a good and supportive work environment and a credible goal (or even better, an incredible one), and they'll start churning out high quality code. It's that simple.
Well, the only thing is that a company that scores the full twelve points on the Joel Test is likely to charge more than a sweatshop that scores a mere 3 or 5 (a self-estimated industry average). However, the benefits of having the synergy of efficient operations and bespoke trouble-free software that leverages strategic organisational goals will undoubtedly produce an exceptional return on investment, overcoming any hurdle rates and far outweighing the project costs. I mean, at the end of the day the company's work will likely be worth the money, every penny of it.
I also hope that someone will find this longish answer worthwhile.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
If you had $100 in your hand right now and had to bet on one of these options, which would you bet it on? The question is:
What is the most important factor that determines the cost of a project?
Typing speed of the programmers.
The total amount of characters typed while programming.
The 'wc *.c' command, i.e. the end size of the C files.
The abstractions used while solving the problem.
Update: OK, just for the record, this is the most stupid question I have ever asked. The question should be: rank the list above, most important factor first. Which are the most important factors? I ask because I think the character count matters: fewer characters to change when requirements change means the change is done faster. Or?
UPDATE: This question was discussed in Stackoverflow podcast #23. Thanks Jeff! :)
From McConnell:
http://www.codinghorror.com/blog/archives/000637.html
[For a software project], size is easily the most significant determinant of effort, cost, and schedule. The kind of software you're developing comes in second, and personnel factors are a close third. The programming language and environment you use are not first-tier influences on project outcome, but they are a first-tier influence on the estimate.
Project size
Kind of software being developed
Personnel factors
I don't think you accounted for #3 in the above list. There's usually an order of magnitude or more difference in skill between programmers, not to mention all the Peopleware issues that can affect the schedule profoundly (bad apples, bad management, etc).
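To see how strongly size dominates in the classic size-driven models McConnell draws on, here is a sketch of the Basic COCOMO formula (effort in person-months = a * KLOC^b), using the standard published coefficients for an "organic" project; treat the output as an order-of-magnitude illustration, not a real estimate:

```python
# Basic COCOMO, organic mode: effort (person-months) = 2.4 * KLOC ** 1.05.
# Only an illustration of how strongly size drives the estimate; real
# estimation also needs the "kind of software" and personnel factors above.
def cocomo_effort(kloc, a=2.4, b=1.05):
    return a * kloc ** b

for kloc in (1, 10, 50, 100):
    print(f"{kloc:4d} KLOC -> {cocomo_effort(kloc):7.1f} person-months")
# Because the exponent is above 1, a 100x increase in size costs well
# over 100x the effort.
```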
None of those things are major factors in the cost of a project. What it all comes down to is how well your schedule is put together: can you deliver what you said you would deliver by a certain date? If your schedule estimates are off, well, guess what: your project is going to cost a lot more than you thought it would. In the end, it's schedule estimates all the way.
Edit: I realize this is a vote, and that I didn't actually vote on any of the choices in the question, so feel free to consider this a comment on the question instead of a vote.
I think the largest cost on large projects is testing, fixing the bugs and fixing misinterpretations of the requirements. First you need to write tests. Then you fix the code so that the tests pass. Then you do the manual tests. Then you must write more tests. On a large project the testing and fixing can consume 40-50% of the time. If you have high quality requirements then it can be even more.
Characters, file size, and typing speed can be considered of zero cost compared to proper problem definition, design and testing. The latter are easily an order of magnitude more important.
The most important single factor determining the cost of a project is the scale and ambition of the vision. The second most important is how well you (your team, your management, etc.) control the inevitable temptation to expand that vision as you progress. The factors you list are themselves just metrics of the scale of the project, not what determines that scale.
Of the four options you gave, I'd go with #2 - the size of the project. A quick project for cleaning out spam is going to be generally quicker than developing a new word processor, after all.
After that I'd go with "the abstractions used while solving the problem": if you come up with the wrong method of solving the problem, whether because the logic is bad or because of a restriction of the system, then you'll definitely spend more money re-designing and re-coding what has already been done.