Under-priced projects (tight budget) - what are the characteristics? [closed] - project-management

I'm trying to determine some of the markers that indicate a project of limited resources.
In my experience a project becomes a ‘limited resources’ project because someone was desperate to sell a solution to a client. The result is a tight budget, features are culled, and SDLC processes are cut to a minimum. These short-cuts are taken so the company has some chance of making a profit or even breaking even.
This is a list of things which I have seen go hand-in-hand with a project of limited resources:
Bare minimum amount of time allotted to QA
Strict bureaucratic process for off-spec work
Change request budget may be small or non-existent
Formalised processes get dropped in favour of using time for development
No time available for value-add QA like content checking (e.g. grammar or spelling errors in text).
Can’t do any content management or data entry for client
Have to go for ‘good enough’ coding solutions
No time allowance for hallway usability testing.
No budget for writing user documentation or manuals.
Generally no time for technology research before coding
No time to produce a risk analysis document
A production check-list may be used instead of a project schedule.
There is no time for a programmer to fill in their ‘actual’ times vs. estimated times in the project schedule.
Progress updates given to clients may be less frequent or very basic
Less time is available to spend on understanding the clients business domain
Programmers may have to work unpaid overtime.
No time allotted for a project post-mortem
What other sure signs are there for a limited resources project?
===
EDIT
I will try to clear up some of the confusion with an example. This is what I mean: the client is given a proposal/quote saying their project will cost $20k. The client then comes back and says "sorry, my budget is $16k maximum". The boss says "make the proposal $16k - we want this work".
So, effectively, you have to do a project with less budget than it should have. There are boundaries where it becomes ridiculous - if the client were to say "my budget is $4k" then you couldn't possibly do it.
And yes, sometimes a tight budget can become so silly that it was a bad business decision to accept the project in the first place (i.e. a doomed project).
I understand that there is no such thing as a project with an unlimited budget. Often business people make the decision whether a project should be undertaken (and a business person often isn't a project manager).

What you are talking about is not a 'limited resources' project, but instead a rushed and unplanned project.
A few items in your list I take issue with:
Strict bureaucratic process for off-spec work
Change request budget may be small or non-existent
Actually, these should be the norm for most projects. Who's requesting and paying for the changes, you or the client?
Can’t do any content management or data entry for client
No budget for writing user documentation or manuals.
If that's not part of the contract, why are you doing it?
Have to go for ‘good enough’ coding solutions
At some point, you have to stop at 'good enough', or else you are going to be polishing from now until the end of time.
Something I would add to your list are:
Office supplies become scarce or go under lock and key.
Corporate supplied food/beverages disappear
Down time disappears. 100% of your time on your time sheet must be dedicated to project work.
The printer/photocopier is running full-bore printing other staff members' resumes.
The Boss' door is shut for 90% of the day.

Quite frankly, I've never heard of a project that had unlimited resources. Even the government will put the brakes on something after a while.
So, from a pure logic perspective, all projects have limited resources.

Limited resources? All projects that I know of have limited resources:
time
developers
budget

Outdated or obviously beta documentation which doesn't jibe with whatever release of the product is on site, or documentation which looks like it's been through several generations of Xerox copies.
No onsite installation or support. Depending on the size of the system being implemented, a company in good shape might send one or more of their developers to oversee the implementation, whereas a company that's tight might offer phone or email support only, in a more "fire and forget" approach.

Continuous occurrences of new, forgotten features, assumed to be "obvious" and "implicit" by the user, never stated in the requirements, leading to Bug vs. Change Request discussions.
Waterfall model adopted instead of an iterative approach.
Customer protesting the costs of fixing, saying something like: "If you have bugs, it is because you didn't do your job right", not accepting the triple constraint's effects on quality when cutting time/budget.
Pressure for lower prices for maintenance and support activities after project deployment in production.
Pressure for adopting a fixed-price project (outsourcing) to transfer financial risk together with timeline risk.

"Under-priced projects"
If I understand correctly what you mean, you really are talking about projects where the resources available to the project are not appropriate to achieve the results that were promised to the client.
I can think of four ways for this situation to arise:
Wrong estimates when preparing the project plan
Requirements creep
Reducing the project budget without reducing the project scope
Inadequate resources (staff skills, computer resources, etc.) for the project scope
When people in the project become aware of the situation, they really have two options: cut costs or cut scope. Cutting the scope can be a hard sell and may endanger the project's viability, so most of the time people opt for cutting costs, especially since costs can be cut in many ways without attracting attention from the higher echelons:
Unpaid overtime
Reducing quality
Eliminating documentation
and so forth.
In fact, you may even look good as a project manager when you start cutting costs, since cost containment is one of the project manager's responsibilities! I assume that what you want is to find ways to diagnose an under-funded project. I think that instead of developing an extensive list of symptoms, I would strive to identify a general condition.
In my opinion, there is a general condition that allows you to pinpoint an under-funded project. For most projects, staff is the biggest cost - or at least the second-biggest cost that can be managed by the project manager. Whenever you find an experienced manager taking measures to reduce staff costs, and those measures were not part of the original plan, then you can be sure you have an under-funded project.
Regards,

Using the information you guys so kindly contributed, I was able to pull it all together and write an article.
I'll put a link here in case someone in the future is looking for help on the topic:
Surviving An Under-resourced Project
--LM

Related

Adding more structure to our development processes? [closed]

I work with a small team (4 developers) writing firmware and software for our custom hardware. I'm looking into better ways to organise the team and better define processes.
Our Current Setup
Developers are generally working on 2-3 projects at a time.
We have projects that work in an iterative sort of way, where a developer is in regular contact with the customer and features are slowly added and bugs fixed.
We also have projects with fixed delivery dates and long lead times; final hardware might appear only a few weeks before delivery. The fixed projects are usually small changes to an existing product or implementation, and the work is somewhat intermingled.
We are also moving from consulting to products, so we are occasionally adding features that we think will add value, at our own cost.
The Issues
We have a weekly meeting where proportions of time are allotted to each project. "Customer A wants to test feature X next week", so the required time is allotted. "Customer B is having issues with Y, could developer P drive down and take a look?", etc.
When we're busy, these plans are very loosely followed. Issues arise and lower-priority stuff gets pushed back. Sometimes priorities are not clear to developers, so there is friction when priorities appear to change. The next week there will be a realisation that we're getting behind on project Z and we all pull off some long days.
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Developers are generally working on 2-3 projects at a time.
Multitasking is incredibly inefficient. Switching the brain from one task to another requires time for the gears to change over.
When we're busy, these plans are very loosely followed.
Then why create plans at all?
Is it at all possible to dedicate just one developer to one task / product / customer? So developer P is the only one who talks to customer B? (Certainly the developer would need to document exactly what he's doing in case he gets hit by a bus, but he should be recording issues and roadmaps anyway.)
The next week there will be a realisation that we're getting behind on project Z and we all pull off some long days.
If there had been only one developer on project Z anyway, he wouldn't have been distracted by customer A's problems.
Don't think in terms of a pool of developers serving a pool of customers, think of one developer for a given customer. (This can make vacation planning a little tougher, but if you're constantly pulling all-nighters, you aren't spending enough time away from the office anyhow.)
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Aren't we all.
"Customer A wants to test feature X next week", so the required time is allotted.
Allotted by whom?
Do you create your own schedules? If not, the only response to management creating a schedule for you is all-nighters.
Realistic non-all-nighter schedules will bother management. Until you can prove that your customers want a better schedule with fewer all-nighters, there isn't much you can do.
The only way to reduce the all-nighters is to get stuff done sooner. But if the hardware doesn't arrive sooner, there isn't much you can do, is there?
Two thoughts: drive quality and improve estimates.
I work in a small software shop that produces a product. The most significant difference between us and other shops of a similar size I've worked in is full-time QA (now more than one person). The value this person should bring, from day one, is not testing until the tests are written out. We use TestLink. There are several reasons for this approach:
Repeating tests to find regression bugs. You change something, what did it break?
Thinking through how to test functionality ahead of time - this is a cheek-by-jowl activity between developer and QA, and if it doesn't hurt, you're probably doing it wrong.
Having someone else test and validate your code is a Good Idea.
Put some structure around your estimation activity. Reuse a format, be it Excel, MS Project or something else (at least do it digitally). Do so, and you'll start to see patterns repeating in what you do when building software. Generally speaking, include time in your estimates for thinking about it (a.k.a. design), building, testing (QA), fixing and deployment. Also, read McConnell's book Software Estimation and use anything you think is worthwhile from it; it's a great book.
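As a hedged illustration of the "structure around your estimation activity" point above, here is a minimal sketch of a reusable estimate broken into the phases just named; the hour figures are invented for illustration, not anything from the original answer.

```python
# Minimal estimate-template sketch (hypothetical numbers). The point is to reuse the
# same phase breakdown every time so patterns in your own estimates become visible.
estimate_hours = {
    "design (thinking about it)": 8,
    "building":                  24,
    "testing (QA)":              12,
    "fixing":                     6,
    "deployment":                 4,
}

total = sum(estimate_hours.values())
for phase, hours in estimate_hours.items():
    print(f"{phase:<28} {hours:>3} h  ({100 * hours / total:.0f}%)")
print(f"{'total':<28} {total:>3} h")
```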
Poor quality means longer development cycles. The most effective step is QA; short of that, unit tests. If it were a web app I'd also suggest something like Selenium, but you're dealing with hardware, so I'm not sure what can be done. Improving estimates means being able to forecast when things will suck, which may not sound like much, but knowing ahead of time can be cathartic.
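To make the regression-testing point above concrete, here is a hedged sketch of how written-out test cases can be automated and re-run after every change; the `parse_reading` function and its expected values are hypothetical stand-ins, not part of the original answer.

```python
# Minimal regression-test sketch (hypothetical example). The idea: write the expected
# behaviour down once, then re-run it after every change so that "you change something,
# what did it break?" has a mechanical answer.
import unittest


def parse_reading(raw: str) -> float:
    """Hypothetical firmware-log parser: 'TEMP:23.5C' -> 23.5."""
    if not raw.startswith("TEMP:") or not raw.endswith("C"):
        raise ValueError(f"malformed reading: {raw!r}")
    return float(raw[len("TEMP:"):-1])


class ParseReadingRegressionTests(unittest.TestCase):
    def test_nominal_reading(self):
        self.assertAlmostEqual(parse_reading("TEMP:23.5C"), 23.5)

    def test_negative_reading(self):
        # The kind of case that gets added after a bug is found, so it never regresses.
        self.assertAlmostEqual(parse_reading("TEMP:-4.0C"), -4.0)

    def test_malformed_reading_rejected(self):
        with self.assertRaises(ValueError):
            parse_reading("23.5")


if __name__ == "__main__":
    unittest.main()
```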
I suggest you follow the Scrum framework. Create a Scrum environment with an enterprise product. Have product teams working on the features for their own individual products, each of which is part of the combined enterprise product. If you have the resources, have a production/issues support and infrastructure Scrum team. If the issues are coming your way too quickly, have the infrastructure team try following Kanban or Scrumban.
The Scrum Framework in itself will solve most of your problems if adopted properly.

Assessment of a project manager's volume of work - what is a good methodology? [closed]

Currently, my company utilizes agile as its development principle. I was approached by my boss to determine some methodology for measuring the amount of work a project manager does on a given project in flight. To be honest, I can't really think of anything foolproof.
I guess the best question is how do we assess how busy, on a day to day basis, a project manager is?
Remember that ANY metric you can come up with is most likely going to be gamed.
[ Do I get a badge for on-topic link to Joel On Software? :) ]
Having said that, you can try a union of the following approaches:
Developer feedback!!! (e.g. feedback on a good PM would be "I had problems X, Y and Z and he made them disappear"). Not so good for measuring how "busy" a PM is, but really good for measuring how effective he/she is.
Volume and rated clarity of project plans (easily gamed)
Rate of change of project plans (easily gamed)
Amount of meetings/meeting time (easily gamed)
Success rates of projects (on timeliness vs. % of features delivered vs. customer satisfaction). Not easily gamed but devil's own work to normalize this across projects.
Timesheets will measure the amount of work in one sense (you can see how their day breaks down and so on) but not I think in the sense you want.
Ultimately I don't believe there is a useful metric for Project Managers in this sense, but I don't think that's an issue.
I think ultimately you should measure project success rather than "busy-ness". After all, why do you care how busy the PM is if they deliver successful projects?
One PM may spend half a day putting together a risk log and mitigation plan which contains 20 risks, another may spend 2 days putting together one which only has 5 risks but none of those numbers are any more useful as a metric than lines of code. The key thing is not how long you spent doing it, how many risks you identified, how big your mitigation plans are, but whether you actually managed risk on the project successfully.
You're better off looking at what a Project Manager is meant to do, which is to deliver projects on-time, to budget and to customer satisfaction (which I'd use as the ultimate measure of quality rather than defects).
After all, do you measure how "busy" the CEO is? Or is he just judged on the profit the company makes?
To do this:
Time - The only way it can really be gamed is by massively padding estimates and plans and this can be minimised by reviewing the plans and estimates and having all relevant parties agree them (developers, PM, client). The other side of this is that the PM must agree to the plan rather than have the implementation date foisted on him or her. You might want to measure this on either the overall implementation or each milestone.
Budget - Measurable but gameable. For most development projects the key thing here is honest timesheets from the developers, and the best way to ensure this is to make it so the PM is the PM but not their line manager. That way the developers have someone to fight their corner (a technical director, for instance) if they're being pressured to fill in timesheets to keep the budget down. Again, the PM should agree the budget; it's not reasonable to expect him to deliver on something he's told you is unreasonable.
Customer satisfaction - Hard to measure, so I'd suggest you keep it simple and go with a straightforward post-project review with the account manager and marks out of 10 for communication, delivery and whatever else is important. It is subjective, but ultimately so is customer satisfaction.
But a lot of it depends on the company culture. For some organisations the key thing will be billable hours, others developer satisfaction will be part of the mix.
I am trying to understand WHY you have been asked to estimate the amount of work that a project manager does on a project.
At best it is just a request for a rule of thumb; otherwise it indicates that your boss just doesn't know the first thing about running a project. Even when projects look very similar, there will always be something unique about a project:
The team is not identical (teaching the new guy the ropes takes time)
The spec might vary just a tiny bit (and that tiny bit might double the workload)
Even the season might influence the outcome
and so on and so forth
Each and every condition on the project might change the workload of the project manager, so it will always be a subjective assessment.
I would suggest you use the same Burn Down and Level of Effort that you use for the developers. A PM's task in an Agile environment is a bit different (and from shop to shop it's different), but the PM should be able to provide a list of tasks, etc. I'm thinking positive and seeing it as your boss's approach to determining how much availability the PM has.
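A minimal sketch of what "the same Burn Down and Level of Effort" could look like applied to the PM's own task list; the task names, hours and dates below are invented for illustration, not from the original answer.

```python
# Hedged burn-down sketch for a PM's task list: track estimated effort per task and
# report how much has been burned down so far. All figures are hypothetical.
from datetime import date

# (task, estimated hours, date completed or None if still open)
pm_tasks = [
    ("Update risk log",           4, date(2024, 3, 4)),
    ("Sprint review prep",        3, date(2024, 3, 6)),
    ("Customer status report",    2, None),
    ("Vendor contract follow-up", 5, None),
]

total = sum(hours for _, hours, _ in pm_tasks)
remaining = sum(hours for _, hours, done in pm_tasks if done is None)

print(f"Total estimated effort: {total} h")
print(f"Remaining effort:       {remaining} h")
print(f"Burned down:            {total - remaining} h ({100 * (total - remaining) / total:.0f}%)")
```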
Most project managers equate responsibility with status, so a project manager who has spare capacity is quite likely to volunteer to take on a new responsibility, because it's in his/her own best interest. In all but the most functional organizations it's often better to be visibly overloaded, for that heroic look.
It's more likely to be in the organization's best interest to slightly under load its project managers, so that there is some spare capacity available should something start to go wrong. A project manager might well choose to apply his/her spare capacity in some constructive way in any case. Excessive politicking or other unconstructive activity is a good indicator of someone who could be more constructively deployed. Even on agile projects, workload tends to be uneven across a project cycle - e.g. delivery is often a management-intensive activity - somebody who is continuously heavily loaded probably has too much to do, and may be ignoring or hiding a serious problem.
If the next level of management conducts regular project reviews, pays attention to how many problems are being escalated, whether the project reports correlate with the news from the grapevine, and does some basic estimation on workload projections for each project manager, then the organization should be able to run a reasonably efficient system.
Managers tend to be political and psychological animals. Any methodology that doesn't take that into account is ignoring reality, so a good methodology for this problem is likely to be based more on observed behaviors than on hard numbers.
Excuse me if I am being too purist, but the tag and the question call for Agile. What would a project manager be in Agile? Might you be trying to assess the work being done by a product owner or a scrum master?
In any case, both roles perform several tasks that are hard to measure, so probably your boss is looking at the wrong picture.
For instance, a scrum master is "The person responsible for supporting the development team, clearing organizational roadblocks, and keeping the agile process consistent". Basically, the scrum master is a coach and a facilitator. Blocking disruptive requests or distractions created by higher levels of management, through negotiation or persuasion to follow scrum practices, is one of the skills commonly used by scrum masters. Several of these soft skills are hard to measure as "work" since they do not involve working on a computer or producing a report.
I think the metric your boss would benefit most from is related more to how effective the team is and how well the scrum master facilitates the work of his team-mates. DVK has a very valid point: the metrics you create can be "gamed", so it is best to trust that your managers are busy if your projects are progressing and your teams are happy and work as a team.

Why do many software projects fail today? [closed]

As long as there have been software projects, the world has wondered why they fail so often.
I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20 - 30 years.
You can also add your top reason why a software project fails. Mine is "Requirements are poor or non-existent", which also includes "No (real) customer / user involved".
EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: The project was more than 10% over budget and time.
In my opinion the 10% + / - is a good range for an offer / tender.
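To make the working definition above concrete, here is a hedged sketch of the 10% rule; the helper name and sample numbers are mine, and reading "over budget and time" as requiring both overruns is one possible interpretation.

```python
# Hedged sketch of the question's working definition of "fail":
# more than 10% over budget AND more than 10% over schedule.
# (Some readers may prefer "or"; swap the operator if so.)
def project_failed(actual_cost, budget, actual_weeks, planned_weeks, tolerance=0.10):
    over_budget = actual_cost > budget * (1 + tolerance)
    over_time = actual_weeks > planned_weeks * (1 + tolerance)
    return over_budget and over_time

# Example: 15% over on both dimensions -> counts as a failure under this definition.
print(project_failed(actual_cost=115_000, budget=100_000,
                     actual_weeks=30, planned_weeks=26))  # True
```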
EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But IMHO it also comes out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the "lazy, incompetent" developer teams?
When I read the posts I can also sense that there is a big "gap" between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes ...).
I'm asking myself: How can we improve it? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic / iteration-based time schedules"?
EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called: Clean-Code-Developer. The aim of the group is to bring more professionalism into software engineering. Independently if a developer has a degree or tons of years of experience you can be part of this movement.
Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work. And he has an internal value system which helps him to improve and become better.
Check it out on: Clean Code Developer
EDIT: Our company is doing at the moment a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get a feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.
Bad management.
Projects are not successes or failures based on some underlying feature of the project, but on whether they fulfill the needs of the users. (They can fail altogether, in which case there was a gross misstatement of what was possible.) It is mostly in the process of evaluating the feasibility and cost-benefit ratio of the project, and establishing goals, that software projects tend to fail or succeed.
There's a disconnect between people who deal with facts and things (like programmers) and people who deal with other people (like sales types and managers). To a programmer, the facts are the facts, and have to be dealt with. To a sales person, the facts are what other people think, and are changeable.
There's also differences between tangible and intangible facts. Nobody thinks that workers could build a large bridge in a month if they were really motivated; they can see all the steel and concrete and other stuff that has to be moved and fixed into position. Software is much less tangible, and lacks the physical restrictions: while it is not even theoretically possible to build the bridge within a month, it is conceivable that a team could create a large project within a month, as "all" they have to do is get everything right the first time. It is physically possible to type thousands of lines of code a day; it's just that the chance that they're usable as is is so close to zero it doesn't matter. The actual productivity of a top developer is actually pretty unimpressive in word count, compared to (say) the productivity of a journalist.
Therefore, those who are used to flexible facts don't have the imposing physical limits to remind them that things can be pushed only so far, no appreciation for what programming actually requires, and no good feel for how much productivity is realistically possible. Moreover, they know how to get their way in negotiations, much more than the average developer, so in negotiations about what's possible they tend to assume more than they can, in the real world, get.
In the meantime, software development is inherently fuzzy, because producing the physical product is trivial. I can produce a copy of software quickly and cheaply, once it's been developed. Software development is design work, pure and simple. Anything corresponding to manufacturing is ruthlessly eliminated with such things as compilers and wizards and code generation. The developer, faced with the manager who wants the impossible, finds it hard to say the impossible is actually impossible, because there's no way to say it's actually impossible. Given facts that are unknown enough to feel flexible, the person with strong negotiating skills and determination will typically get the answer he or she wants.
Given this disconnect, one might ask whose responsibility it is to bridge it. The answer is, in my opinion, clear. The responsibility for understanding how different people think belongs to the people who specialize in dealing with other people. The responsibility for coordinating different types of people belongs to the people whose job it is to coordinate these things. Therefore, managers.
Managers who do understand software development and developers, and can deal well with other managers, will do well, and their projects will generally succeed. There are still far too many of the other type in the world.
Not a direct answer, but I found the Virtual Case File to be a fascinating case study on how a big government-backed well-funded project can still tank.
You can also add your top reason why a software project fails.
Another IEEE Spectrum Online article "Why Software Fails" examines this very question. It summarizes the major points as follows:
Unrealistic or unarticulated project goals
Inaccurate estimates of needed resources
Badly defined system requirements
Poor reporting of the project's status
Unmanaged risks
Poor communication among customers, developers, and users
Use of immature technology
Inability to handle the project's complexity
Sloppy development practices
Poor project management
Stakeholder politics
Commercial pressures
Poor planning.
Hofstadter's Law
It always takes longer than you expect, even when you take Hofstadter's Law into account.
Mismanagement.
SW projects get started by throwing developers at a perceived problem. Business requirements crystallize as the project progresses. New functionality gets added while deadlines stay put. More developers are thrown in. Original project members quit or get fired. By this point too much time, money and too many resources have been invested in the project for it to be canceled. As the deadline passes, the project is declared finished and successful despite the obvious lack of a finished product.
Come to think of it - I've yet to see a SW project fail...
Honestly, I think it's because most programmers are not very good at what they do (and I don't mean just cranking out code). People on stackoverflow are probably the exception. I don't know about the rest of you, but as a consultant/contract programmer I have worked in or around many places, and the ratio of mediocre or poor programmers to good ones is about 10 to 1.
One of my strengths has always been estimating accurately and then delivering on time and on or under budget - I always aim for coming in 10% under cost and on time. Then I like to tell my client that because I got things done for less $$ than expected, they can choose which of the "extras" they would like to add in.
Even a perfectly functioning product that is late and/or over budget will be considered a failure by many business managers. Programmers often focus on just the technical aspects of what they do, with little regard for the cost or deadline. You really need to do all three well for it to be deemed successful project. There are many other programmers that could code circles around me without a doubt, but for the person paying for the project, that is rarely enough.
It is because no-one seems to read anymore.
Every single reason why projects fail has been analyzed time and time again.
You only have to read three books to know why 80% of projects fail:
The Deadline: A Novel About Project Management (Tom DeMarco, published 1997)
It's a great introduction and it's pretty entertaining.
Peopleware: Productive Projects and Teams (Tom DeMarco & Timothy Lister, published 1987)
The Mythical Man-Month: Essays on Software Engineering (Fred Brooks, published 1975)
We as a profession simply seem to forget everything every 3-5 years (see "centralised computing is inefficient; let the clients handle it" vs cloud computing).
(From a programmer's point of view - I'm not a project manager, but I've often been involved in the process.)
A number of people have mentioned that bad programmers are endemic. But I think this is true in another sense as well - we're all bad programmers in that we find it difficult to anticipate complexity, an unavoidable issue that 50 years of magic bullet estimation and planning schemes have failed to solve.
Anticipating the side effects of large projects gets exponentially more difficult as projects grow. This is a dull truism, for sure, but for me it means that on any project I've worked on where I've been involved in the estimating process I've run into some case where there's an unanticipated consequence of a design decision that causes everything to come to a grinding halt, or at least a few days of bugfixing - just something that nobody foresaw, not any sort of malpractice or stupidity. It's just the nature of a complex enough system.
Aside from the built-in uncertainty, there's also a tendency to underestimate things whose outline is known, because the fact that they have less uncertainty makes them seem simpler to implement.
So the uncertain stuff gets magnified, the clear stuff gets minimized, and what really kills you is the thing that you didn't think would be uncertain.
The number one reason: a failure of project management.
A PM's raison d'etre is to make a project succeed; ergo a project failure is their failure. Certainly there are factors beyond their control, but it's still within the PM's job description to manage that risk, and the only get-out clauses should be someone higher up the food chain taking decision control (which is a terrible thing to do to a PM) or acts of god.
In my experience failures mostly occur when PM work has been fast and loose or non-existent, including when decisions start to flow from sales people and when the client starts decreeing change control. A good PM is priceless.
Failure is a judgement -- more of an accusation, really.
"The project was more than 10% over budget and time."
Which budget? Which schedule?
6 months ago, I wrote a plan saying it would take 6 months.
3 months ago, the users asked for more stuff. I gave them a plan that said it would take 9 more months.
Last month I was told that the project was 6 months over budget and therefore a "failure".
But wait. It delivered what the users wanted. It was over the "original" estimate. It was under the revised estimate. The users want more. IT wants less.
I'll approach it from a different aspect than most the rest here.
I've noticed a project slowly fail over a period of time. Sure, it's gotten better in that time too--but it still isn't profitable. In this market profitability, and being in the black, means success.
Why is it failing? I think it's simple: you get what you pay for.
Software is like a bank account, not primordial ooze. If you don't put resources into it (time, money, focus, effort) then you won't get anything out of it except failure and cost. So you must invest things into your project, and sometimes the earliest work sets the stage for years to come. You can't throw mud at your computer and expect a new mouse two years and $10 million later, so likewise there must be effort expended.
One of the biggest problems today is "budget developers" in a third-world country. I don't begrudge them their part of the market, but for a well-funded Silicon Valley startup to seek them out and get a budget foundation (framework or even prototype) is to make a poor investment in the future. This very same budget framework is what is causing my friends so much of a hassle today. It works now; it worked when it was written, but it wasn't written well and few even take the time to maintain it. Were the company to stop and rewrite the software the way it should have been written in the first place, they wouldn't have all this trouble. Can they afford the time? Nope. They have to make it profitable before they can even think of it.
As the saying goes, "I can make it: cheap, fast or good. Now, pick any two of those." Everyone wants all three, myself included. But if you don't invest the time, planning, and work required to make your project a success from the start ... then don't expect anything you can be proud of later. It'll feel like a forged Mona Lisa where you, and every other engineer like you, can see a defect here and there that shouldn't have been there from the start.
So:
Don't undertake what you cannot afford in: time, money, effort, focus, etc.
Don't skip planning!
Don't be afraid to rewrite early when it counts the most. (Later it'll be worse than a trip to the dentist, believe me.)
Don't underestimate the power of bureaucracy to prevent you from doing it right.
And don't be cheap where you should spend the most of your time. It will cost you later, guaranteed. And if not you, then someone else will take the bullet for you.
One common mistake is that sales people and technical people do not communicate sufficiently. So the salespeople sell things that are technically not feasible within budget. (And then they run off with their bonus :) )
Being over budget and time is not a good definition of failure (and actually being in budget and time doesn't always mean success). Take the following examples provided by Hugh Woodward, PMP, in Expert Project Management - Project Success: Looking Beyond Traditional Project Metrics:
Sydney Opera House: With its graceful sails dominating Sydney Harbor, the Sydney Opera House is arguably one of the most recognized buildings in the world. Yet, from a project management perspective, it was a spectacular failure. When construction started in 1959, it was estimated to cost $7 million, and take four years to build. It was finally completed in 1973 for over $100 million.
[...]
Project Orion: This massive effort to develop Kodak's new Advantix photographic system was reputedly very well managed from a project management perspective. PMI recognized it as the 1997 International Project of the Year, and Business Week selected the system as one of the best new products of 1996 (Adams, 1998). But Kodak's stock price has fallen 67% since the introduction of the Advantix system, in part because it failed to anticipate the accelerating switch to digital photography.
Corporate Intranet: Finch describes a project that involved the implementation of a corporate intranet to globalize and improve communications. From a traditional project perspective, it failed to meet its success criteria, but not significantly. It was one month late and believed to have been accomplished with a small budget overrun. But both the project manager and senior management viewed the project as successful. The hardware and software had been installed successfully with a minimum of disruption, thereby providing all staff members with access to the corporate intranet. Following implementation, however, employees made only limited use of the intranet facilities. The main objective of the project was therefore not achieved. In this case, both the project manager and senior management focused on an objective that was too narrow.
[...]
Manufacturing Plant Optimization: A paper manufacturing company with five plants across North America decided to increase its manufacturing capacity by embarking on a de-bottlenecking program. A project team was formed to install the necessary equipment, and charged with completing the work in 18 months at a cost of $26 million. But almost immediately, the project team was asked to defer major expenditures until an unrelated cash flow problem was resolved. Rather than stop work completely, the team adopted a strategy of prototyping the technologies on which the de-bottlenecking program was based, and actually developed some cheaper and more effective solutions. Even when the project was authorized to proceed, the team continued this same approach. The project eventually spanned five years, but the resulting capacity increase was three times the initial commitment. Not surprisingly, the company immediately appropriated another $40 million to continue the program.
[...]
So in these examples, which ones were truly successful? Examples like the 2002 Winter Olympics and the Batu Hijau Copper Concentrator would suggest that these are truly successful because they not only met the traditional project managers' definition of success, but also met the projects sponsors' perception of success.
As we start to look at the examples like Project Orion, the Corporate Intranet and the Laptop Upgrade, we notice that the traditional metrics start to fail. These projects are considered successes by the project managers' definition of success, but failed at meeting the sponsors' success criteria. The Project Orion example is quite astounding, as this project was recognized by PMI (the Project Management Institute) in 1997 as the International Project of the Year. Yet it did not increase Kodak's revenue, because they did not foresee the adoption of digital cameras.
Most interesting are the examples of the Manufacturing Plant Optimization and the Sydney Opera House. They both failed to meet the traditional project managers' success metrics but were in fact considered successes. This is particularly shocking when you see that the Sydney Opera House had a "cost overrun of 1300%" and a "schedule overrun of 250%".
Once we realize that projects can fail to meet the traditional metrics of success, but still be successful to the stakeholders, this creates a quandary for the project manager. How does one really define success? Is it possible that a "Challenged" project could be canceled that would have met the sponsors' needs? Is it also possible to identify a project that should be canceled that is currently on time, on budget and meeting the defined needs?
What do you think of that conclusion? How do you really define success?
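As a quick sanity check on the Sydney Opera House overrun figures quoted above, here is a rough sketch using only the numbers from the excerpt (a $7 million / four-year estimate against "over $100 million" and a 1959-1973 build), so the percentages are approximate.

```python
# Back-of-the-envelope check of the quoted Sydney Opera House overruns,
# using only the figures given in the excerpt above (so the result is approximate).
estimated_cost_m, actual_cost_m = 7, 100      # millions; actual quoted as "over $100 million"
estimated_years, actual_years = 4, 14         # construction ran from 1959 to 1973

cost_overrun = (actual_cost_m - estimated_cost_m) / estimated_cost_m * 100
schedule_overrun = (actual_years - estimated_years) / estimated_years * 100

print(f"Cost overrun:     ~{cost_overrun:.0f}%")      # ~1329%, in the ballpark of the quoted "1300%"
print(f"Schedule overrun: ~{schedule_overrun:.0f}%")  # 250%, matching the quoted figure
```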
My answer is rather different from the rest of these, but quite common around here: killer bugs. I had a project go an extra two weeks (50% extra time) because of one switch in the source that I didn't know about until I dug through the source code (there was nothing documented in help or on the web).
People/companies do not proudly shout about their failures, so many cases will never be heard of.
Poor use of practices and software development methods. In my experience, one of the big reasons a project fails is that the development team uses the wrong method to approach the software development process. Choosing a methodology without having a good understanding of how it works and what it takes can bring time-consuming issues to a project, like poor planning.
A common problem is also the use of technologies without a prior evaluation to understand how they can be applied and whether they bring any value to the project.
There have been some good studies done on this. I recommend this link from the Construx website (Steve McConnell's company).
The Construx link above is really good!
Many projects are managed on a rosy picture of reality. Managers tend to power-talk developers into optimistic estimates. But say a 20-week project gets "talked down" to 10 weeks. The requirements phase will now be 1 week instead of 2. After 1 week, the requirements aren't finished, but the project moves on. Now you're working on shaky ground - and on a stretched schedule!
This can be funny. Once there was this old guy in a room opposite mine. His job title was system administrator. But the system he was supposed to administer wasn't there. It had never been finished, although management thought it had been. The guy played games for about a year before he got bored and moved on.
The funniest part? Management put up a new job opening after he left!
IT Project Failures is a blog about project failures that may have a few answers here if one wants to read about it.
Myself, I think a large part of this lands on the question of being able to state exactly what is expected in x months at $y, when really the answer is much more open-ended. For example, if a company is replacing an ERP or CRM system, there is a good chance it isn't going to get all the requirements exactly right, and thus there will be some changes, bug fixes and extras that come from taking on such a large project. For an analogy, consider how many students entering high school or university could state their precise schedule for all 4 years without taking any classes, and actually stick to it in the end. My guess would be very few, as some people change majors, courses they wanted to take aren't offered, or other things happen that change the expected result. But how is this captured in a project plan - we started here, and while we thought we were going there, we ended up way over in spot number three?
The last statistic that I heard was that 90% of projects are either over time or over budget. So if you consider that failing, quite a bit.
The reason they fail mainly lies in process. We as software engineers don't do a good job of gathering requirements and controlling the customer. Because building software is a "mysterious" job to people outside of IT, it is difficult for them to understand why last-minute changes are difficult. It's not like building a house, where you can clearly show them why it isn't possible to quickly add another door to the back of a brick house.
Software projects are not the only ones that go over budget or take more than the scheduled time to complete. Just open the newspaper and look at public projects like bridges.
But for a software project it is far easier to cancel everything. If a bridge or building is half finished, there is no way back. Half the infrastructure is in place. It is very visible and it takes money to remove it.
For a software project you can press Shift-Delete and nobody notices.
It's just a very hard job to do an accurate cost analysis.
In the case of the FBI's Virtual Case File system, it comes down to poor program management. The program had pre-9/11 requirements and post-9/11 expectations. Had government management done their job, there would have been contract mods.
http://government.zdnet.com/?p=2518
"Because of an open-ended contract with few safeguards, SAIC reaped more than $100 million as the project became bigger and more complicated, even though its software never worked properly. The company continued to meet the bureau’s requests, accepting payments despite clear signs that the FBI’s approach to the project was badly flawed, according to people who were involved in the project or later reviewed it for the government."
Although $105,000,000 for 700,000 lines of code comes to $150 per line of code. Not too bad.
To truly gauge whether a project is really successful, the original estimate / budget is probably a poor yardstick. Most engineers tend to underestimate how much time it will take because of political pressure, and they don't know how to estimate anyway. Many managers are poor planners because they want unrealistic deadlines to please their boss; they often don't understand what's involved, and besides, a plan is something they can look at, understand, and use as a stick in meetings, as opposed to actually helping solve problems. Practically, it helps businesses make rough projections of expense for budgeting purposes; at least it gives them something to go by.
A better measurement of project success would be - are the users happy? Does it help the business make money? How fast will the money gained help recover the cost of the system? These are harder to gauge than a simple plan, though.
In the end, I've found you're better off delivering on deadline, even if it means working some overtime. It sucks, but that is the reality of it.
As stated previously, the vast majority of people involved in software development do not actually understand how to:
ask the right questions to learn about the problem
appreciate the user's goal and judge expectations
understand the technology available and the related ecosystem
Most of the team, directly or indirectly involved, are poorly skilled, and do not appreciate or know when they are wrong so that they can take action.
Even with perfect requirements and related definitions, too many developers don't know what they are doing.
Just look at the types of questions asked here. Would you go to a doctor who asked the equivalent questions? The scary thing is that they ask and don't know how to learn or reason.
Different agendas
Management do not really understand what a developer does, how they produce code, or the difficulties encountered. All they care about is the final product delivered on time. Developers, on the other hand, care about the technical aspects: well-produced code in a solution they are proud of.
Deadlines
I've often heard developers say they wish they could produce better code, and that deadlines often push them into producing something that just works rather than good code. When these deadlines are unrealistic, the problems are exacerbated.
I think this thread managed to gather the biggest group of talented, unhappy software developers, engineers, project managers, etc. that it is possible to gather in one place.
I share the point of view of most of the posts here, and I think they come out of a lot of suffering from seeing co-workers not doing a good job, when the job (programming) and success are among the most important parts of our lives.
http://www.clean-code-developer.de/
They have an incredible cause! And their philosophy, if taken forward, could create a new layer of heroes, as the mainstream of developers and IT professionals is so rotten these days.
I'm working on something similar here in Brazil, 'cos I love our profession of bringing software alive, both as a PM and as a software developer (.NET), and I can't stand people who treat programming as nothing but their way to make big money with almost no effort.
... Sure, I don't consider pulling all-nighters in front of the computer a mark of genius, but the few who matter have done it more than once.

Is there such a thing as a process smell? [closed]

We're generally familiar with code smells here, but just as damaging, if not more so, is when the business side of things - as much as it falls within our domain - is going wrong.
As examples, the inverse of anything on the Joel test would be considered a major process smell (i.e. no source control, no testers) but those are obvious ones and the point of "smells" is that they're subtle and build into something destructive. I'm looking for granularity here.
To start off with, here are a couple (which can be turned into a list as the answers come in):
Writing code before you have a signed contract with the client
Being asked for on-the-fly estimates ("just a rough one will do") for anything which will take more than a day (a few hours?)
Ancient cargo-cult wisdom prevails (personal example - Visual Studio SourceSafe integration is banned)
You've stopped having non-project specific group meetings (or lack any similar forum for discussion)
So what are some other process smells, and just how bad are they?
The book "Antipatterns" by William J. Brown et. al. has a bunch of project-related smells. They aren't always disasters in progress; mitigating circumstances exist for just about any smell.
The Portland Pattern Repository also has a page on Antipatterns, covering many of the same topics as the "Antipatterns" book. Visit http://c2.com/cgi/wiki?AntiPatternsCatalog and scroll down to "Management Antipatterns." A few examples:
Analysis Paralysis - a team of otherwise intelligent and well-meaning analysts enter into a phase of analysis that only ends when the project is cancelled.
Give Me Estimates Now - a client (or PointyHairedBoss) demands estimates before you have enough data to deliver them. You accept the "challenge" and give off-the-top-of-your-head estimates (i.e. guesses). The client/boss then treats the estimate as an iron-clad commitment.
Ground Hog Day Project - meetings are held which seem to discuss the same things over and over and over again. At the end of said meetings, decisions are made that "something must be done."
Design By Committee - Given a political environment in which no one person has enough clout to present a design for a system and get it approved, how do you get a design done? Put together a big committee to solve the problem. Let them battle it out amongst themselves and finally take whatever comes out the end.
Collect them all! :-)
Back Dating - being given an end date and then told what needs to get done
Inverse QA Coverage - QA focuses on the non-essential items (because that's all they know how to test)
Environmental Alignment Issues - the various environments (Dev, Test, Staging, Production) are not in sync for code and data - therefore any testing prior to production is invalid
Delivery Date Detachment - no one believes in the end date because: it was made up to begin with and 100% of prior projects never met their delivery dates
Old Grumpy Code - old code is feared because there's no desire to refactor
The evil PM triangle (scope, cost, resources and/or quality) - to adjust the project you need to add people, reduce quality, reduce scope, etc. Once a project is in motion, most changes (even a reduction in scope) will increase time and cost and reduce quality. Once the train tracks are down, it's tough to just hang a left turn.
One smell I have a real problem with (because I work with it): Not ditching tools, dev software, methodologies, or anything else that doesn't work.
Many times, there is one (or more than one) piece of software that clearly, blatantly, doesn't work and likely interferes with the development process, but which a project manager simply refuses to replace/upgrade "because it would cost too much {time, money, whatever} to replace."
Edit: This also extends to machines and other infrastructure too (examples: a build server that takes an hour to do a two-minute build, or a version control system - ahem CVS - that takes 15 minutes to find out whether there have been any updates on a 50MB source tree).
Shipping a prototype - "we'll productize it later"
I suggest checking out the Organizational section of the Wikipedia page on Anti-Patterns. The ones I've had to deal with are 'Crisis mode' and the 'on-the-fly estimates' you mentioned.
You haven't had a post-project review... when the project ended 6 months ago.
Some smells I have seen:
Optimistic management, but they can't pay your salary this month. This is really bad. I left the company in time, but it died a few months later.
Extremely fanatic team-building sessions, focussing on how great the company is. But in the end it all goes down.
Good new people are laid off because they tried to change the process. A real shame. I have seen some people who really tried to improve the company, but old habits never die, so it often ends in big disillusionment.
The "boss is always right" mentality...
There is more but I won't spoil the fun for others.
Changes to process are made with no thought to timing or current deliverables, then immediately reversed when deliverables turn up late due to instituting new process.
Someone goes on medical leave and, as a result, the team is behind trying to pick up that person's work as well as their own. When the managers or clients or client sales reps are told things will be delayed as a result, they are only concerned about when things will happen and whether you can work nights and weekends in the meantime, and never even ask about the person with the emergency and how he or she is doing.
When overtime for low-level people is expected, but the people who want this urgently leave on time and are not available to answer questions. Or when they make you work overtime to be ready by 8 am and then don't look at it in QA for three more days. Hello, I could have done it by then in regular hours.
Delivery of needed files (for data import, for example) or information minutes before the due date, and then blaming developers when the due date is not met.
What I call: NIH (Process edition), a.k.a. Choose your own adventure.
Evidence of this:
you spend endless meetings debugging the process. And refactoring it.
nothing really gets done, because no one knows what they should be doing.
I guess this is an antipattern, rather than a smell.
Interesting question and even more interesting answers. Thanks for those.
I have been in almost all roles of software development (Developer, QA, Tech Lead, Project Manager - even client) and I can safely list the following smells
How quickly does the team react to new inputs (and how accepting are they of change)?
How many layers does it take to get things done (bureaucracy)?
How clear are the features/tasks - are the goals SMART, and do we have any KPIs?
How serious is the team working on the project about it?
Is the team meeting regularly (read: daily) to discuss achievements, goals and issues?
Most important, however, and the most evident (to a good nose), is the hygiene level of the project management tool being used (Excel sheet, piece of paper, agile tools, email, whatever - in whichever methodology you use). That is the first thing I notice while evaluating projects.
Do I know where the project stands at the moment?
Can I tell (Without asking the team) what needs to be done next?
Can I tell what the team is working on right now?
Can I tell when the next release is and if it's achievable?
Can I tell if client is getting regular updates?
Can I tell if client is giving approvals and if his feedback is taken care of in time?
Can I tell, just from looking, the load distribution of the project across the engineers?
Obviously, all of this is well covered if you pick any modern Agile methodology, but depending on the market and the kind of work, the mileage may vary. So, keeping myself methodology-agnostic, this is the bare minimum set of smells that should be gotten rid of.

I need this baby in a month - send me nine women!

Under what circumstances - if any - does adding programmers to a team actually speed development of an already late project?
The exact circumstances are obviously very specific to your project ( e.g. development team, management style, process maturity, difficulty of the subject matter, etc.). In order to scope this a bit better so we can speak about it in anything but sweeping oversimplifications, I'm going to restate your question:
Under what circumstances, if any, can adding team members to a software development project that is running late result in a reduction in the actual ship date, with a level of quality equal to what the existing team would deliver if allowed to work until completion?
There are a number of things that I think are necessary, but not sufficient, for this to occur (in no particular order):
The proposed individuals to be added to the project must have:
At least a reasonable understanding of the problem domain of the project
Be proficient in the language of the project and the specific technologies that they would use for the tasks they would be given
Their proficiency must not be much less than that of the weakest existing member, nor much greater than that of the strongest. Weak members will drain your existing staff with tertiary problems, while a new person who is too strong will disrupt the team by insisting that everything they have done and are doing is wrong.
Have good communication skills
Be highly motivated (e.g. be able to work independently without prodding)
The existing team members must have:
Excellent communication skills
Excellent time management skills
The project lead/management must have:
Good prioritization and resource allocation abilities
A high level of respect from the existing team members
Excellent communication skills
The project must have:
A good, completed, and documented software design specification
Good documentation of things already implemented
A modular design to allow clear chunks of responsibility to be carved out
Sufficient automated quality-assurance processes for the required defect level (these might include unit tests, regression tests, automated build deployments, etc.)
A bug/feature tracking system that is currently in place and in use by the team (e.g. Trac, SourceForge, FogBugz, etc.).
One of the first things that should be discussed is whether the ship date can be slipped, whether features can be cut, and whether some combination of the two will allow you to satisfy the release with your existing staff. Many times it's a couple of features that are really hogging the team's resources and won't deliver value equal to the investment. So give your project's priorities a serious review before anything else.
If the outcome of the above paragraph isn't sufficient, then visit the list above. If you caught the schedule slip early, the addition of the right team members at the right time may save the release. Unfortunately, the closer you get to your expected ship date, the more things can go wrong with adding people. At some point, you'll cross the "point of no return" where no amount of change (other than shipping the current development branch) can save your release.
I could go on and on, but I think I've hit the major points. Outside of the project, and in terms of your career, the company's future success, etc., one of the things you should definitely do is figure out why you were late, whether anything could have been done to alert you earlier, and what measures you need to take to prevent it in the future. A late project usually occurs because you either:
were late before you started (more stuff than time), and/or
slipped one hour, one day at a time.
Hope that helps!
It only helps if you have a resource-driven project.
For instance, consider this:
You need to paint a large poster, say 4 by 6 meters. With a poster that big, you can probably put two or three people in front of it and have them paint in parallel. However, placing 20 people in front of it won't work. Additionally, you'll need skilled people, unless you want a crappy poster.
However, if your project is to stuff envelopes with ready-printed letters (like "You MIGHT have won!"), then the more people you add, the faster it goes. There is some overhead in doling out the stacks of work, so the benefits stop well before you reach one person per envelope, but you can get benefits from many more than just two or three people.
So if your project can easily be divided into small chunks, and if the team members can get up to speed quickly (like... instantaneously), then adding more people will make it go faster, up to a point.
Sadly, not many projects in our world are like that, which is why docgnome's tip about the Mythical Man-Month book is really good advice.
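To make the poster-versus-envelopes point concrete, here is a rough toy model (my own sketch, not anything from the answer above - the function name and numbers are invented for illustration) of why adding workers only helps while the work divides cleanly and the per-worker coordination cost stays small:

    # Toy model: finish time = evenly split work + a fixed cost of doling out
    # and coordinating each worker's stack of work. Numbers are illustrative.
    def completion_time(total_work_hours: float, workers: int,
                        overhead_per_worker: float) -> float:
        return total_work_hours / workers + overhead_per_worker * workers

    # Envelope stuffing: 100 hours of work, 0.1 h of coordination per worker.
    for n in (1, 2, 5, 10, 20):
        print(n, round(completion_time(100, n, 0.1), 1))
    # -> 100.1, 50.2, 20.5, 11.0, 7.0: still improving at 20 workers.
    # The poster is different: only ~3 people physically fit in front of it,
    # so extra workers add coordination cost without adding any parallelism.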
Maybe if the following conditions apply:
The new programmers already understand the project and don't need any ramp-up time.
The new programmers already are proficient with the development environment.
No administrative time is needed to add the developers to the team.
Almost no communication is required between team members.
I'll let you know the first time I see all of these at once.
According to the Mythical Man-Month, the main reason adding people to a late project makes it later is the O(n^2) communication overhead.
I've experienced one primary exception to this: if there's only one person on a project, it's almost always doomed. Adding a second one speeds it up almost every time. That's because communication isn't overhead in that case - it's a helpful opportunity to clarify your thoughts and make fewer stupid mistakes.
Also, as you obviously knew when you posted your question, the advice from the Mythical Man-Month only applies to late projects. If your project isn't already late, it is quite possible that adding people won't make it later. Assuming you do it properly, of course.
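For what it's worth, the arithmetic behind the O(n^2) claim is just the number of pairwise communication channels. A quick sketch (my own, with made-up team sizes for illustration):

    # With n people there are n*(n-1)/2 possible pairwise channels, so the
    # coordination surface grows quadratically while raw capacity grows linearly.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (1, 2, 3, 5, 8, 12):
        print(f"{n:2d} people -> {channels(n):3d} channels")
    # 1 -> 0, 2 -> 1, 3 -> 3, 5 -> 10, 8 -> 28, 12 -> 66
    # Going from one person to two adds only a single channel, which is why
    # a second person usually helps rather than hurts.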
If the existing programmers are totally incompetent, then adding competent programmers may help.
I can imagine a situation where you had a very modular system, and the existing programmer(s) hadn't even started on a very isolated module. In that case, assigning just that portion of the project to a new programmer might help.
Basically the Mythical Man Month references are correct, except in contrived cases like the one I made up. Mr. Brooks did solid research to demonstrate that after a certain point, the networking and communication costs of adding new programmers to a project will outweigh any benefits you gain from their productivity.
If the new people focus on testing
If you can isolate independent features that don't create new dependencies
If you can orthogonalise some aspects of the project (especially non-coding tasks such as visual design/layout, database tuning/indexing, or server setup/network configuration) so that one person can work on that while the others carry on with application code
If the people know each other, and the technology, and the business requirements, and the design, well enough to be able to do things with a knowledge of when they'll step on each other's toes and how to avoid doing so (this, of course, is pretty hard to arrange if it isn't already the case)
Only when, at that late stage, you have some independent tasks (almost 0% interaction with other parts of the project) that nobody has tackled yet, and you can bring onto the team somebody who is a specialist in that domain. The addition of a team member has to minimize the disruption for the rest of the team.
Rather than adding programmers, one can think about adding administrative help. Anything that removes distractions, improves focus, or improves motivation can be helpful. This includes system administration, as well as more prosaic things like getting lunches.
Obviously every project is different, but most development jobs involve a certain amount of collaboration among developers. Where this is the case, my experience has been that fresh resources can unintentionally slow down the very people they rely on to bring them up to speed - and in some cases those can be your key people (incidentally, it's usually the 'key' people who take the time to educate a newb). Once they are up to speed, there is no guarantee that their work will fit the established 'rules' or 'work culture' of the rest of the team. So again, it can do more harm than good. That aside, these are the circumstances where it might be beneficial:
1) The new resource has a tightly scoped task that requires a minimum of interaction with other developers and a skill set they have already demonstrated (e.g. porting existing code to a new platform, or externally refactoring a dead module that's currently locked down in the existing code base).
2) The project is managed in such a way that other, more senior team members' time can be shared to bring the newb up to speed and to mentor them along the way, ensuring their work is compatible with what's already been done.
3) The other team members are very patient.
I suppose that adding people toward the end of the work could speed things up if:
The work can be done in parallel.
The amount of time saved by the added resources is more than the amount of time lost by having the people experienced with the project explain things to those who are not.
EDIT: I forgot to mention that this kind of thing doesn't happen all that often. Usually it is fairly straightforward stuff, like admin screens that do simple CRUD on a table. These days those kinds of tools can mostly be auto-generated anyway.
Be careful of managers who bank on this kind of work to hand off, though. It sounds great, but in reality there usually isn't enough of it to trim any significant time off the project.
Self-contained modules that have yet to be started
Missing development tools that they can put in place (like an automated build manager)
Primarily I'm thinking of things that let them stay out of the way of the people currently developing. I do agree with The Mythical Man-Month, but I also think there are exceptions to everything.
I think adding people to a team may speed up a project more than adding them to the project itself.
I often run into the problem of having too many concurrent projects. Any one of those projects could be completed faster if I could focus on that project alone. By adding team members, I could transition off other projects.
Of course, this assumes that you've hired capable, self-motivated developers, who are able to inherit large projects and learn independently. :-)
If the extra resource complements your existing team, it can be ideal. For example, if you are about to set up your production hardware and verify that the database is actually tuned (as opposed to just returning results that your team, as domain experts, know are good), borrowing time from a good DBA who works on the project next to yours can speed the team up without much training cost.
Simply put, it comes down to comparing the time left against the productivity you will get from someone, excluding the time it takes the additional resources to come up to speed and become productive, and subtracting the time existing resources invest in teaching them. The key factors (in order of significance):
How good the resource is at picking it up. The best developers can walk onto a new site and be productive fixing bugs almost instantly with little assistance. This skill is rare, but it can be learnt.
The segregability of tasks. They need to be able to work on objects and functions without tripping over the existing developers and slowing them down.
The complexity of the project and the documentation available. If it's a vanilla, best-practice ASP.NET application with common, well-documented business scenarios, then a good developer can just get stuck in straight away. This factor, more than any other, will determine how much time the existing resources will have to invest in teaching, and therefore the initial negative impact of the new resources.
The amount of time left. This is often mis-estimated too. Frequently the logic is "we only have x weeks left and it will take x+1 weeks to get someone up to speed". In reality the project IS going to slip, and it does in fact have 2x weeks of dev left, so getting more resources on sooner rather than later will help.
Where a team is already used to pair programming, then adding another developer who is already skilled at pairing may not slow the project down, particularly if development is proceeding with a TDD style.
The new developer will slowly become more productive as they understand the code base better, and any misunderstandings will be caught very early, either by their pair or by the test suite that is run before every check-in (and there should ideally be a check-in at least every ten minutes).
However, the effects of the extra communication overheads need to be taken into account. It is important not to dilute the existing knowledge of the project too much.
Adding developers makes sense when the productivity contributed by the additional developers exceeds the productivity lost to training and managing those developers.
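As a back-of-the-envelope illustration of that break-even (a hedged sketch of my own - the function, parameter names and numbers are assumptions, not something taken from the answers above):

    # Net hours gained = hours the newcomer contributes after ramping up,
    # minus the hours existing developers spend teaching and managing them.
    def net_hours_gained(weeks_left: float, newcomer_hours_per_week: float,
                         ramp_up_weeks: float, mentor_hours_per_week: float) -> float:
        productive_weeks = max(weeks_left - ramp_up_weeks, 0)
        gained = productive_weeks * newcomer_hours_per_week
        lost = min(ramp_up_weeks, weeks_left) * mentor_hours_per_week
        return gained - lost

    print(net_hours_gained(2, 30, 3, 10))   # -20: two weeks really left, net loss
    print(net_hours_gained(6, 30, 3, 10))   #  60: six weeks really left, net win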
