What factor determines the cost of a software project? [closed] - project-management

Suppose you have $100 in your hand right now and have to bet it on one of the options below. Which would you bet on? The question is:
What is the most important factor that determines the cost of a project?
Typing speed of the programmers.
The total amount of characters typed while programming.
The output of 'wc *.c': the final size of the C files.
The abstractions used while solving the problem.
Update: OK, just for the record, this is the most stupid question I have ever asked. The question should be: rank the list above, most important factor first. I ask because I think the character count matters. Fewer characters to change when requirements change means the change is done faster. Or does it?
UPDATE: This question was discussed in Stack Overflow podcast #23. Thanks Jeff! :)

From McConnell:
http://www.codinghorror.com/blog/archives/000637.html
[For a software project], size is easily the most significant determinant of effort, cost, and schedule. The kind of software you're developing comes in second, and personnel factors are a close third. The programming language and environment you use are not first-tier influences on project outcome, but they are a first-tier influence on the estimate.
Project size
Kind of software being developed
Personnel factors
I don't think you accounted for #3 in the above list. There's usually an order of magnitude or more difference in skill between programmers, not to mention all the Peopleware issues that can affect the schedule profoundly (bad apples, bad management, etc.).

None of those things are major factors in the cost of a project. What it all comes down to is how well your schedule is put together: can you deliver what you said you would deliver by a certain date? If your schedule estimates are off, well, guess what, your project is going to cost a lot more than you thought it would. In the end, it's schedule estimates all the way.
Edit: I realize this is a vote, and that I didn't actually vote on any of the choices in the question, so feel free to consider this a comment on the question instead of a vote.

I think the largest cost on large projects is testing, fixing the bugs, and fixing misinterpretations of the requirements. First you need to write tests. Then you fix the code until the tests pass. Then you do the manual tests. Then you must write more tests. On a large project the testing and fixing can consume 40-50% of the time. If you have high quality requirements, it can be even more.

Characters, file size, and typing speed can be considered to have essentially zero cost compared to proper problem definition, design, and testing, which are easily an order of magnitude more important.

The most important single factor determining the cost of a project is the scale and ambition of the vision. The second most important is how well you (your team, your management, etc.) control the inevitable temptation to expand that vision as you progress. The factors you list are themselves just metrics of the scale of the project, not what determines that scale.

Of the four options you gave, I'd go with #2, the size of the project. A quick script for cleaning out spam is generally going to take less time than developing a new word processor, after all.
After that I'd go with "The abstractions used while solving the problem": if you come up with the wrong method of solving the problem, whether because the logic is bad or because of a restriction in the system, you'll definitely spend more money on redesigning and recoding what has already been done.

Related

Software Engineering - Time Schedules, Projects Estimates [closed]

My question is: how can someone really estimate the duration of a project in terms of hours?
This has been my overall problem throughout my career.
Let's say you need a simple app with customer/orders forms and some inventory capacity along with user/role privileges.
If you'd go with Java/Spring etc., it would take X amount of time.
If you'd go with ASP.NET, it would take Y amount of time.
If you'd go with PHP or something else, you would get yet another figure,
right?
Everyone says to break the project into smaller parts. I agree. But then how do you estimate that the user/role part takes so much time?
The problem gets worse when you implement things that have, let's say, been 'upgraded' (Spring 2.5 vs. Spring 3.0 has a lot of differences, for example).
Or perhaps you run into things you can't possibly know about because they are new, and you always run into new things! Sometimes I have found odd behaviours, like some .NET presentation libraries that gave me the jitters and cost me two nights of my life over something that turned out to be a typo in some XML file...
Basically, what I see as the problem is that there is no standardized schedule for these things, no pre-estimated evaluation of time and cost. It is a service, not a product, so it scales with human ability.
But what are the average times for average things for average people?
We have lots and lots of cost estimation models that give us a schedule estimate, COCOMO II being a good example (a rough sketch of the idea is at the end of this answer), although, more or less as you said, a typo can still cost you two nights. So what's the solution?
In my view:
Expert judgement serves the industry best and will continue to do so in spite of the various cost estimation techniques springing up, because that judgement more or less keeps in mind the overheads you might have faced doing such projects in the past.
Some models give you a direct mapping between programming language LOC per function point, but that is still at a fairly high level and does not tell you what happens if a new framework is introduced (as you mentioned, switching from Spring 2.5 to 3.0).
As you said, something new always keeps coming up, so I think that's where expert judgement again comes into play: experience can help you determine where you might end up spending more time.
So, in my view, it should be a mix of an estimation technique and expert judgement about the overheads that might occur on the project. More or less, you will end up with a figure very near your actual effort.
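For reference, here is a minimal sketch of the older Basic COCOMO formulas (COCOMO II adds scale factors and cost drivers that this deliberately omits); the coefficients are the standard published Basic COCOMO values, and the 40 KLOC input is just a made-up example:

    # Basic COCOMO (Boehm, 1981), the simpler ancestor of COCOMO II.
    # Effort (person-months) = a * KLOC^b; Duration (months) = c * Effort^d.
    COEFFICIENTS = {
        # project class:  (a,   b,    c,   d)
        "organic":        (2.4, 1.05, 2.5, 0.38),
        "semidetached":   (3.0, 1.12, 2.5, 0.35),
        "embedded":       (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, project_class="organic"):
        a, b, c, d = COEFFICIENTS[project_class]
        effort = a * kloc ** b        # person-months
        duration = c * effort ** d    # calendar months
        return effort, duration

    effort, duration = basic_cocomo(40, "semidetached")  # 40 KLOC: invented example
    print("Effort: %.1f person-months, duration: %.1f months" % (effort, duration))

The point is not the exact numbers but that size dominates such models, while the overheads you mention (framework upgrades, odd library behaviour) are exactly what the expert-judgement part has to cover.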
My rule of thumb is to always keep the initial estimate language-agnostic. Meaning, do NOT create different estimates for different implementation languages. That does not make sense, as a client will express requirements to you in a business sense.
This is the "what". The "how" is up to you, and the estimate given to a client should not leave that choice to them. The "how" is solely your domain as an analyst. You arrive at it by analyzing the demands of the business, then looking at what experience you have in the teams, your own time constraints, your infrastructure and so on. Then deliver an estimate in BOTH time and the tech you assume you will use.
Programming languages and algorithms are just mediators to achieve a business need. Therefore the client should not be deciding this, so you shall only give one estimate, where you as the analyst have made the decision on what to use, given the problem at hand and your resources.
I follow these three domains as a rule:
The "what" - Requirements, they should be small units of CLEAR scoping, they are supplied by the client
The "how" - Technical architecture, the actual languages and tech used, infrastructure, and team composition. Also the high level modeling of how the system should integrate and be composed (relationship of components), this is supplied by the analyst with help of his desired team
The "when" - This is your actual time component. This tells the client when to expect each part of the "what". When will requirement or feature x be delivered. This is deducted by the analyst, the team AND the client together based on the "how" and "what" together with the clients priorities. Usually i arrive at the sums based on what the team tells me (technical time), what i think (experience) and what the client wants (priority). The "when" is owned by all stakeholders (analyst/CM, team and client).
The forms to describe the above can vary, but i usually do high level use case-models that feed backlogs with technical detailing. These i then put a time estimate on, first with the team, then i revise with the client.
The same backlog can then be used to drive development sprints...
Planning/estimation is one of the most difficult parts of software engineering. My approach:
- Split the work into smaller parts
- Invite some team members (about 5-8)
- Discuss what is meant by each item
- Let each of them fill in a form with how many hours each item will take; don't discuss it, and don't let them look at the others' numbers
- Then, for each item, check the average and discuss if there is a lot of variance (risks?)
- Remove the lowest and highest value per item
- Take the average of the rest
This works best for work that is based on experience; for new projects with completely new things to do, it is always more difficult to plan.
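As an illustration of the tallying steps above, here is a minimal sketch; the item names, hour figures and the variance threshold are all invented for the example:

    # Per item: the hours that each of the 5-8 team members wrote down independently.
    estimates = {
        "customer/orders form": [6, 8, 8, 10, 24],    # invented numbers
        "user/role privileges": [12, 14, 16, 16, 20],
    }

    for item, hours in estimates.items():
        average = sum(hours) / len(hours)
        if max(hours) - min(hours) > 0.5 * average:   # arbitrary variance threshold
            print("%s: large spread (%d-%dh), discuss the risks" % (item, min(hours), max(hours)))
        trimmed = sorted(hours)[1:-1]                 # drop the lowest and highest value
        print("%s: %.1fh (trimmed average)" % (item, sum(trimmed) / len(trimmed)))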
Hope this helps.
There is no short easy answer to your question. Best I can do is to refer you to a paper which discusses some of the different models and techniques used for cost analysis.
Software Cost Estimation - F J Heemstra

Software Development Costs Pyramid [closed]

A friend was telling me the other day that there is a pyramid for the costs of fixing a problem in the software development life cycle. Where could I find this?
He was referring to the cost of fixing a problem.
For example,
To fix a problem at the requirements stage costs 1.
To fix a problem at the development stage costs 10.
To fix a problem at the testing stage costs 100.
To fix a problem at the production stage costs 1000.
(These numbers are just examples)
I would be interested in seeing more about this if anyone has references.
The Incredible Rate of Diminishing Returns of Fixing Software Bugs
(Stefan Priebsh: OOP and Design Patterns: Codeworks DC in September 2009)
This is a well-known result in empirical software engineering that has been replicated and verified over and over again in countless studies. Which is very rare in software engineering, unfortunately: most software engineering "results" are basically hearsay, anecdotes, guesses, opinions, wishful thinking or just plain lies. In fact, most software engineering probably doesn't deserve the "engineering" brand.
Unfortunately, despite being one of the most solid, most scientifically and statistically sound, most heavily researched, most widely verified, most often replicated results of software engineering, it is also wrong.
The problem is that all of those studies do not control their variables properly. If you want to measure the effect of a variable, you have to be very careful to change only that one variable and that the other variables don't change at all. Not "change a few variables", not "minimize changes to other variables". "Only one" and the others "not at all".
Or, in the brilliant Zed Shaw's words: "If you want to measure something, then don't measure other shit".
In this particular case, they did not just measure in which phase (requirements, analysis, architecture, design, implementation, testing, maintenance) the bug was found, they also measured how long it stayed in the system. And it turns out that the phase is pretty much irrelevant, all that matters is the time. It's important that bugs be found fast, not in which phase.
This has some interesting ramifications: if it is important to find bugs fast, then why wait so long with the phase that is most likely to find bugs: testing? Why not put the testing at the beginning?
The problem with the "traditional" interpretation is that it leads to inefficient decisions. Because you assume you need to find all bugs during the requirements phase, you drag out the requirements phase unnecessarily long: you can't run requirements (or architectures, or designs), so finding a bug in something that you cannot even execute is freaking hard! Basically, while fixing bugs in the requirements phase is cheap, finding them is expensive.
If, however, you realize that it's not about finding the bugs in the earliest possible phase, but rather about finding the bugs at the earliest possible time, then you can make adjustments to your process, so that you move the phase in which finding bugs is cheapest (testing) to the point in time where fixing them is cheapest (the very beginning).
Note: I am well aware of the irony of ending a rant about not properly applying statistics with a completely unsubstantiated claim. Unfortunately, I lost the link where I read this. Glenn Vanderburg also mentioned this in his "Real Software Engineering" talk at the Lone Star Ruby Conference 2010, but AFAICR, he didn't cite any sources, either.
If anybody knows any sources, please let me know or edit my answer, or even just steal my answer. (If you can find a source, you deserve all the rep!)
See pages 42 and 43 of this presentation (pdf).
Unfortunately the situation is as Jörg depicts, in fact somewhat worse: most of the references cited in this document strike me as bogus, in the sense that the paper cited either is not original research, or does not contain words supporting the claim being made, or - in the case of the 1998 paper about Hughes (p54) - contains measurements that in fact contradict what is implied by the curve in p42 of the presentation: different shape of the curve, and a modest x5 to x10 factor of cost-to-fix between the requirements phase and the functional test phase (and actually decreasing in system test and maintenance).
Never heard of it being called a pyramid before, and that seems a bit upside-down to me! Still, the central thesis is widely considered to be correct. Just think about it: the cost of fixing a bug in the alpha stage is often trivial. By the beta stage it might take a bit more debugging and some user reports. After shipping it could be very expensive: a whole new version has to be created, you have to worry about breaking in-production code and data, and there may also be lost sales due to the bug.
Try this article. It uses the "cost pyramid" argument (without naming it), among others.

How to estimate the contribution of an individual to a software project? [closed]

I work on a software project and would like to estimate what percentage of the total contribution to the software's development is mine. Is there some tool that does this? Such a tool could be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving over the most important things.
The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of SLOCCount, which spells out the total effort using lines of code, but not on a per-developer basis.
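(For what it's worth, a crude per-author count of added and removed lines can be pulled straight from version control. The sketch below assumes the project lives in a Git repository; it is only the dumb line-counting part, not the intelligent tool I am asking about, and it cannot tell who wrote, who copied, or who merely re-indented the code.)

    # Crude per-author added/removed line counts from Git history.
    import subprocess
    from collections import defaultdict

    log = subprocess.run(
        ["git", "log", "--numstat", "--no-merges", "--pretty=%an"],
        capture_output=True, text=True, check=True,
    ).stdout

    totals = defaultdict(lambda: [0, 0])   # author -> [added, removed]
    author = None
    for line in log.splitlines():
        if not line.strip():
            continue
        parts = line.split("\t")
        if len(parts) == 3 and author is not None:
            added, removed = parts[0], parts[1]
            if added.isdigit() and removed.isdigit():   # numstat prints "-" for binary files
                totals[author][0] += int(added)
                totals[author][1] += int(removed)
        else:
            author = line                               # a %an (author name) line

    for name, (added, removed) in sorted(totals.items(), key=lambda kv: -kv[1][0]):
        print("%s: +%d / -%d" % (name, added, removed))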
My idea of an ideal tool for this purpose would:
measure the complexity of the code (more complex is more effort, but more effort is not necessarily more contribution)
measure the decomposibility/flexibility of the software (more decomposable is better)
how much library code is used -- using library code speeds up the development process, increases the associated risk, and requires the developer to already know the library or learn about it.
be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code".
It is difficult to differentiate between the complexity of the implementation and the intrinsic complexity of the problem. Perhaps a comparison can be made with an equivalent open source counterpart if one exists, or for each submodule separately.
If there is no such tool, is there no merit in having one? Or do you believe in "I do work, I do not measure"? It takes time, after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has different goals, but perhaps that should mean there should be multiple standards, not no standards at all. This looks similar to how a company is valued in the market.
Update: after seeing a few initial answers: It does not make sense to imagine a tool that just outputs the percentages. Are there tools that can help humans (particularly managers) in making better decisions? Or what is the sufficient statistic for making better decisions? Are these statistics available?
I really doubt there is any reliable, trustworthy way of measuring an individual's contribution to the solution. Sometimes rewriting some complicated legacy code so that there are fewer lines of code and a less complicated solution (smaller cyclomatic complexity etc.) can be seen as a quite significant contribution, while in other cases deleting valuable code covering edge cases, which results in the same statistics (fewer lines of code, smaller CC etc.), is definitely something bad. It all comes down to people, trust and cooperation; individualism in the team is almost always wrong, and I would rather avoid it and especially not use it as a motivation factor.
This is a research topic in its own right. There are several tools that have tried to define metrics like code ownership. There are other approaches which tackle other aspects of collaborative development, for instance the trust we can place in the code.
There have also been several studies that tried to use the information from bug trackers, for instance to identify the developer who is the most likely to introduce bugs. But it's hard to be objective (a brilliant developer who is assigned the most critical part of the system will still be more likely to introduce critical bugs).
It's actually hard to monetize development tasks. What is the cost of a bug? What is the gain of refactoring? That would, however, be one way to estimate the contribution of a developer.
The last cool tool I saw of this kind was the Game Plugin for the Hudson continuous integration system. A score is assigned to each developer according to their actions:
-10 if they break the build
-1 for breaking a test
+1 for fixing a test
etc.
That's again a way to somehow assess the contribution of the developer.
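A toy version of that kind of scoreboard is easy to sketch; the point values below just mirror the example above, and the event stream is invented rather than pulled from a real CI server:

    # Toy scoreboard in the spirit of the Hudson "game" plugin.
    from collections import Counter

    POINTS = {"broke_build": -10, "broke_test": -1, "fixed_test": +1}

    # Hypothetical (developer, event) pairs extracted from CI results.
    events = [
        ("alice", "fixed_test"),
        ("bob", "broke_build"),
        ("bob", "fixed_test"),
        ("alice", "broke_test"),
    ]

    scores = Counter()
    for developer, event in events:
        scores[developer] += POINTS.get(event, 0)

    for developer, score in scores.most_common():
        print("%s: %+d" % (developer, score))

As the rest of this thread points out, such a score measures activity around the build, not the value of the contribution.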
All in all, I do feel like what you are asking for exists, but it is still very immature.
I don't think you can get a tool to evaluate your share of the project. Measuring lines of source is all very well, but what of the quality of that source? You wouldn't want someone taking the credit for 200 lines of source if the job could have been easily done in 20...
Also, thinking of my employer for a moment, a lot of people contribute to the project in ways other than code. Immediate examples I can think of would be Project Managers and Testers - both of whom are essential, both of whom rightly deserve some credit.
Martin
The only thing I could imagine would be a voting system. I have absolutely no idea if that would work in your team or anywhere, but I'm sure that you will need humans for any realistic estimation of code quality.
In Stroustrup's book on C++ I once read: "Don't try to solve social problems with technical means."
Thinking pragmatically, the attitude and ability of a programmer can be estimated very quickly by doing a code review together and having a talk on relevant topics.
Thinking as an IT enthusiast and a control freak, it shouldn't be very hard to implement teachable machine-learning software which uses version control, the bug database, etc. and generates real-time performance data for each contributor. For example, R, KNIME or WEKA could be used for this.

How to create an accurate hour estimate? [closed]

What are your experiences regarding project planning and creating hour estimates for new projects?
What is the approach you are using, and why has it or has it not worked for you?
Are there any best practices to take into account?
Estimation Tasks
The principles that I try to use (I don't always get the opportunity) are:
Step-wise refinement
3 point estimates
Risk analysis
Step-wise refinement
When estimating it's important to estimate at the right granularity and to continually break down and add tasks until you're confident in the estimates. Quite often, estimating highlights a lengthy, critical path task that may need more refinement and risk analysis.
Risk analysis
Trying to work out where risks lie in each task (are there lead times for something? is there a lack of knowledge? could a competitor beat you to it? etc. etc.) helps to determine your confidence in the estimates, which allows you to determine how to treat those estimates. Risk analysis also helps to determine if further step-wise refinement is required.
3-point estimates
Specifying the best, probable, and worst case estimates for each task (including design, development, testing, and bug-fixing) helps with risk analysis and planning. The estimates can be used to calculate the most likely duration needed to hit a particular probability of success for that task. Together with information on other related tasks and the risk analysis, a project manager can factor the risk and other known elements, like system testing, into the estimates to get a more reliable schedule.
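One common way to roll three-point estimates up into a single planning number is the PERT weighted average. A minimal sketch, with invented task names and figures, assuming the tasks are independent and the total is roughly normally distributed:

    # PERT-style roll-up of 3-point estimates (best, probable, worst), in days.
    # expected = (best + 4*probable + worst) / 6, sigma = (worst - best) / 6.
    tasks = {
        # task:           (best, probable, worst)   -- invented figures
        "design":          (2, 3, 6),
        "implementation":  (5, 8, 15),
        "test and fix":    (3, 5, 10),
    }

    total_expected = 0.0
    total_variance = 0.0
    for name, (best, probable, worst) in tasks.items():
        expected = (best + 4 * probable + worst) / 6
        sigma = (worst - best) / 6
        total_expected += expected
        total_variance += sigma ** 2        # variances add for independent tasks
        print("%s: expected %.1f days (sigma %.1f)" % (name, expected, sigma))

    sigma_total = total_variance ** 0.5
    print("Project: %.1f days expected, about 84%% confidence at %.1f days"
          % (total_expected, total_expected + sigma_total))

The 84% figure is just "one standard deviation above the mean" under the normality assumption; the point is that the spread between best and worst case, not just the probable value, should drive the number you commit to.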
Of course, the granularity of estimates is also important. There's no point in estimating in hours for most tasks. In software, days is usually best, but sometimes it could be weeks or months (such as if you are out-sourcing blocks of work). Choose a time granularity that makes sense for all the tasks in a project (I usually use days for the requirements capture and functional specification phases, and half-days thereafter as I learn more about the tasks and their sub-tasks).
Conclusion
All three of these items feed into each other, so quite often you have to refine each step a number of times. For example, you might have a stab at the requirements stage, then again during functional specification, and again during design specification.
Estimation is a learned skill; the more you do, the better you get. Risk analysis improves as you learn more about what you don't know, 3-point estimates improve as you learn more about what you do know, and stepwise refinement improves as you go through each step of a design process.
If you have the time, revisit your original estimates after you've completed a task and see how the actual time stacks up against your 3-point estimates and your project plan. If it differs, see where the time was lost or gained and try to learn what you can take from that for future projects.
Estimation should not be a daunting task - I always feel like I know more about my work after estimation than before.
I highly recommend the book "Software Estimation: Demystifying the Black Art" by Steve McConnell. It really covers this question well.
There is some excellent information about this in The Pragmatic Programmer. They advise that you use appropriate units of time: rather than estimating 130 days, estimate 6 months. They also advise concentrating on the tasks that are most crucial, and avoiding estimates based on sub-estimates.
Personally I find it useful to break the task down into understandable chunks to estimate them properly. If the task is large, there are too many nooks and crannies that can hide unthought-of problems. By concentrating on the details of smaller chunks, you can evaluate the potential problems more successfully.
Your question is an NP-complete problem. :) There are many algorithms used to come up with an estimate, but they are always just guesses, are never accurate, and many take a long time to execute. Forget hour estimates; use Scrum or some other agile framework. Making estimates for a project in hours at its start is simply lying to people.
Don't make hour-based estimates until right before you build the feature and update those estimates continually as you progress on the feature.
Don't forget to include time for testing in your estimates.
Practice, practice, practice. To be safe, overestimate as you refine your estimating abilities. Of course, if you're a consultant, this can cost you business. If you're afraid of losing business, underestimate, but be aware you'll be making up the extra hours out of your free time/bottom line.
RE:
If you're afraid of losing business, underestimate, but be aware you'll be making up the extra hours out of your free time/bottom line.
You are better off reducing your hourly rate rather than messing with the hours you present to the client. At least this way, you present the appearance of added value to your client.
LM
Log the time spent on your actual projects and that will help you plan the next one; PSP/TSP offers a way to do it.

How long does it really take to do something? [closed]

I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't start planning for it but then I think that maybe it's for motivational purposes.
Ryan
It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).
This is based on Joel Spolsky's Evidence Based Scheduling, essential reading, as he explains that the other primary important aspect is breaking your tasks down into bite-sized (16-hour max) pieces, then estimating those and adding them together to arrive at your final project total.
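A stripped-down sketch of the velocity idea behind Evidence Based Scheduling follows; the history and new estimates are invented, and Joel's real method does considerably more (per-developer histories, handling of interruptions, and so on):

    # Core of Evidence Based Scheduling: velocity = estimate / actual for past
    # tasks, then Monte Carlo the new estimates against sampled velocities.
    import random

    history = [(4, 6), (8, 8), (2, 5), (16, 20), (6, 4)]   # (estimated, actual) hours, invented
    velocities = [est / act for est, act in history]

    new_estimates = [8, 12, 4, 16]                          # hours for upcoming tasks, invented

    simulated_totals = sorted(
        sum(est / random.choice(velocities) for est in new_estimates)
        for _ in range(10000)
    )

    for pct in (50, 75, 95):
        hours = simulated_totals[int(len(simulated_totals) * pct / 100) - 1]
        print("%d%% chance of finishing within %.0f hours" % (pct, hours))

The output is a distribution of ship dates rather than a single number, which is exactly what makes the record-your-actuals discipline described above pay off.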
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
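In code, the add-it-up-and-multiply heuristic is nothing more than the following (task names and hours invented):

    # "Add it up and double it; triple it if you have to coordinate with others."
    tasks = {
        "design users page": 4, "design tags page": 3,      # hours, invented
        "implement users page": 8, "implement tags page": 6,
        "write tags query": 2,
    }

    needs_coordination = True
    factor = 3 if needs_coordination else 2
    raw = sum(tasks.values())
    print("Raw sum: %dh, padded estimate: %dh" % (raw, raw * factor))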
I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects of between 1 and 6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects, as there are too many factors that mean the metrics from one aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates are going to have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration advocated by agile practitioners: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being effectively spent. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available to do it. No matter how ruthlessly you cut unnecessary features, you would have been more brutal if the deadlines had been even tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.
I believe Joel has written an article on this. What you can do is ask each developer on the team to lay out their tasks in detail (all the steps that need to be done) and to estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias for each developer. When a new project is started, ask them to estimate the time again, and multiply that by each developer's bias to get values close to what can really be expected.
After a few projects, you should have very good estimates.
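A minimal sketch of that bias bookkeeping, with invented names and numbers:

    # Per-developer bias from the last project: actual / estimated.
    # Multiply a developer's new estimates by their bias to correct them.
    past = {
        # developer: (estimated hours, actual hours) on the finished project -- invented
        "alice": (100, 130),
        "bob":   (80, 72),
    }
    bias = {dev: actual / estimated for dev, (estimated, actual) in past.items()}

    new_estimates = {"alice": 40, "bob": 25}    # raw estimates for the new project, invented

    for dev, hours in new_estimates.items():
        print("%s: raw %dh -> corrected %.0fh" % (dev, hours, hours * bias[dev]))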

Resources