What steps make up your web development process and how much time does each phase take? - project-management

Let's say you work 100 days on a project. How many days would each phase of your process (requirements analysis, specification, etc.) take?
I'm interested also in the ratio of specific activities in every phase, such as writing tests, back-end coding, front-end coding, visual design, database design etc.
Many thanks!
EDIT:
Just to make things clear, I'm not talking about web site design - I'm interested in more "serious" web development, such as custom business web applications. I know everything depends on the specifics of each project; however, I suppose the ratios could be roughly the same from project to project.
EDIT2:
As Helen correctly remarked, this question is really hard to answer, since projects can be so different and so can teams. To make it more specific, let's say you have a team of four developers - two of them for back-end work, one for front-end programming and one for design & html/css coding (one member of the team acts as a project manager) - and you are supposed to develop the StackOverflow.com site.

We're running agile scrum projects, so we typically run all these activities in parallel. So while I cannot answer your exact question, I can give you some ideas of the ratios we have found to be effective:
4-5 developers can be served by one client-side programmer (html/css), one on-team tester and one interaction designer (who works with the customer to design wireframes). A team like this typically needs a 50% graphic designer for most applications, but your mileage may vary there. Then there's a project manager, and there are all sorts of other stakeholders who are not part of the core development team.
In the development team you normally have a couple of developers who are sharp on client-side development and a similar number on the back-end. These staffings also tend to reflect resource usage ;) Testing is an integral part of development, in addition to the efforts of the on-team tester.
Your local conditions may of course vary, but these numbers are just to give you some idea.

Step 1: denial
Step 2: anger
Step 3: acceptance
The time each step takes is different for all team members involved.

I agree with everyone who started along the lines of, "It depends on the project".
On the other hand, I do think that there's a consistent process that can be followed; only tweaking the percentages of effort to match the project:
Typically, I follow these basic principles:
Discovery - determine the feature/functionality of the system. The easiest (and worst) thing to do is accept what's being asked for and go with it.
For example, "building stackoverflow.com" is a pretty broad request - and is actually the wrong request. The project has to start with "I need an online location where programmers can collaborate".
Based on that one thing you're trying to solve, you can drill down into all the details you want - like how a question will be answered, asked, rated, etc.
I think this is the most crucial step! output = requirements/specification; 20/100 days can safely be spent here
Wireframing - this is where I like to use basic HTML pages, Paint.NET, or even construction paper and glue to mock up every aspect of the final site's functionality. I like using paper because it's easy to make changes :)
Going through this process forces you to consider just about every aspect of the user experience and gives you the flexibility to add/remove features and adjust your requirements as needed. Your customer has some input to changes before you've committed a bunch of time to writing code.
An added bonus is that you get to use paste :)
10/100 days
Implementation/Testing - I group implementation AND testing together because I think it's short-sighted to develop a whole site without testing along the way. (At the same time, you still need step 4.) This is the part where the rubber hits the road. If you've handled your client properly in steps 1 and 2, you'll be pleasantly writing your code without any last-minute changes in scope (or at least very few). I try to follow a general set of steps for implementation:
data development (db design, query design, sample data setup)
site framework (set up your environments: production, dev, and qa)
front-end structure (css, standard classes, standard html structures)
start coding!
55/100 days
SQA - hopefully you can get some non-involved parties/end users to test the app out as you go. Test plans need to be developed to make it clear what should be tested and what the desired outcomes are. I like using real people for testing the front end; automated tools are fine for code/back-end modules.
This is a good time to let the client see things progressing - they should have very limited ability to make changes at this point.
10/100 days
Delivery/Post Production honeymoon - you've built it, tested it, and you're ready to deploy. Get the code out there and let the client play. You shouldn't have much to tweak; but I'm sure there will be some adjustments.
5/100 days
Some of this seems idealistic, but you'd be surprised how quickly you can ship your application when you've got a well-reviewed, well-crafted specification.

It is impossible to give a meaningful answer to this question. The ratios will not be even roughly the same from project to project. For some projects the visual design barely matters (as long as it more or less works) but the database is critical and complex. For others, it's all about providing a smooth user experience with lots of AJAX goodies and other eye candy, but the underlying data is trivially simple to organise and store.
It sounds like you're thinking mainly of one-man projects, but for larger teams the size and setup of the team also matter, as does your development process.

Probably we are an unusual development shop. Our whole existence (at least during work hours) is requirements gathering. Developers are required to also work in every other department. Be it answering the phone in after-sales support (and fighting the CRM software), driving a forklift in the warehouse (and fighting the mobile terminals) or packing crates in the shipping station (and fighting confusing delivery notes).
When we tackled a new project, "requirements gathering" was usually an afternoon on the whiteboard, usually with somebody from the department that would use the new software most. There was little upfront design and lots of refactoring and rewriting. We were very happy with this and generated about 100,000 lines of code which are well-architected and stable.
But it seems we are hitting a complexity barrier now. This is very frustrating, because moving to "heavier" processes than hack-and-slay coding results in a dramatic loss of productivity.

Just to be clear - you're basically time-boxing your work - which is directly related to having a fixed budget (4 developers x $x per day x 100 days - assuming it's a 100-day duration and not 100 days of work effort). If that's the case then, on average, you would spend:
25% - up-front planning, which includes scope, spec development, technology approach, logistics (computers, servers, work space), resource gathering.
50% - development: test case (TDD) development, schema design and implementation, front-end coding, back-end coding, deployment
15% - testing: basic break/fix activities
10% - overhead/management: project management, communication and coordination.
Very rough estimate - there are many 'areas' to consider, including resource skills/maturity, technology being used, location of resources (one room or across the country), level of requirements, etc. The use of 'skill-specific' resources would make planning more difficult, since you might need the resources to perform multiple roles - one suggestion would be to get 3 generalists who can help spec/design/plan and one tech wizard who would ensure the platform and database are set up correctly (key to success once you have requirements that are as good as possible). A rough sketch of how that split maps onto the 100-day box is below.
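A minimal sketch of the split above, assuming a purely illustrative $800/developer/day rate (not a figure from the answer) and all four developers billing through every phase:

```python
# Map the rough percentage split onto a 100-day, 4-developer time-box.
# The daily rate is a made-up illustrative figure.
total_days = 100
team_size = 4
daily_rate = 800  # assumed $/developer/day

phases = {
    "up-front planning (scope, spec, tech approach, logistics)": 0.25,
    "development (TDD, schema, front-end, back-end, deployment)": 0.50,
    "testing (break/fix)": 0.15,
    "overhead/management (PM, communication, coordination)": 0.10,
}

for phase, share in phases.items():
    days = share * total_days
    cost = days * team_size * daily_rate
    print(f"{phase}: {days:.0f} days, ~${cost:,.0f}")

print(f"Total budget: ~${total_days * team_size * daily_rate:,.0f}")
```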

That is truly a tricky question. To give a somewhat exact estimate of the ratio of time needed for each step - if we take a classical approach of design, implement, test and deploy - one needs to know the specification and the expertise of the project members.
If you take McConnell's book "Software Estimation" (which I highly recommend) there is a chapter in there about historical data and how to use it for future projects.
I do not think that you have exact historical data from former projects - well, I don't have any - although I always remind myself to record them ;)
Since the smallest failures or uncertainties in the design phase are the most crucial ones, take a lot of time to specify what you want to do. Make sure that everyone understands it the same way, and write it down.
To cut a long story short - I'd put 50%-75% of the time into the design (at 75% this would include a prototype to clear up all uncertainties) and equal parts into implementation and testing.
If you are using TDD you would mix design and testing a bit, so you would take a bit of the design phase and add it to the test phase.

1. Building a list of client needs - 1-2 days
This depends on the client, what they need, and how well prepared they are.
2. Designers do initial sketch-ups - 2-3 days
A bit of branching happens here, as steps 2 and 3 happen concurrently.
3. Programmers build any functionality not already in our existing system - 1 day to 1 month
This depends on the client and what they need more than almost anything else. This will also only produce functional code.
4. Repeat steps 2 & 3 until the client is happy with the general feeling of what we have.
Could be 1 iteration, could be 100 (not likely - if by 10 we couldn't make them happy, we'd send them somewhere else).
5. Build final design - 1-5 days
This is the final design: no errors, valid CSS/HTML/JS, everything cross-browser, etc.
6. Build final functionality - 2-3 days
This code is "perfect": it works 100%, it is pretty, there are no known bugs, and the developers are happy to ship it. This and step 5 happen concurrently.
7. Deploy - 10 seconds.
Then 2 weeks, 2 months and 6 months later we do a review to make sure there have been no problems.
So if you skip the review, this usually takes 8-20 days; I don't know how you'll work that into 100 days.
If we are just building an application (or extending one) for a client, we would spend 2-3 days defining EXACTLY what they need and then however long it takes to build it.

Related

Software Rewrite-vs-Running Cost Analysis

The IT department I work in as a programmer revolves around a 30+ year old code base (Fortran and C). The code is in a poor condition partially as a result of 30+ years of ad-hoc poorly thought out changes but I also suspect a lot of it has to do with the capabilities of the programmers who made the changes (and who incidentally are still around).
The business that depends on the software operates 363 days a year and 20 hours a day. Unfortunately there are numerous outages. This is the first place I have worked where there are developers on call to apply operational code fixes to production systems. When I first started, there was actually a copy of the source code and development tools on the production servers so that on-the-fly changes could be applied; thankfully that practice has now been stopped.
I have hinted a couple of times to management that the costs of the downtime, having developers on call, extra operational staff, unsatisfied customers etc. are costing the business a lot more in the medium, and possibly even short term, than it would to launch a wholehearted effort to rewrite/refactor/replace the whole thing (the code base is about 300k lines).
Ideally there'd be some external consultancy that could come in and run the rule over the quality of the code and the costs involved to keep it running vs rewriting/refactoring/replacing it. The question I have is how should a business go about doing that kind of cost analysis on software AND be able to have confidence in that analysis? The first IT consultants down the street may claim to be able to do the analysis, but how could management be made to feel comfortable with it over what they are being told by internal staff?
We recently decided to completely rewrite large portions of our business code from scratch, and it has not gone as well as we had hoped. I've seen a lot of quotes saying you should never try to rewrite anything from scratch, and now I see why. I would recommend starting small - don't try to rewrite the whole thing at once. Identify the large problem areas and focus on refactoring small portions of the system at a time. Since there is 30+ years worth of work in the system, it will take a long time to get it back to a reasonable state. We had about 5-8 years worth of work to rewrite, and it has been difficult. I can't imagine 30+ years of work!
First, the profile of the consultant you need is very specific. Unless you can find someone who worked in a similar domain with the same languages, don't hire him.
Second, there's a 99% probability (I like dramatic numbers) that the analysis will go as follows:
Consultant explores the application
Consultant does understand 10% of the application
Time's up, time for the report
Consultant advises a complete rewrite (no refactoring, plain rewrite)
So you may as well save yourself what the consultant will cost.
You have only two solutions here:
Keep the actual source code, but determine proper methods for fixing problems, so that you have a very long-running refactoring that is progressively carried out by those who know the application
Get a secondary team to make a new application to replace the old one
If I talk about a secondary team, it's because you cannot bring just one architect to make the new application and have the old team working with him:
They're too busy on the old application
There will be frictions because the newcomer will undoubtedly underestimate the task at hand
I talk from experience, believe me.
If you go the "new application" way, don't put your hopes too high. You'll end up with an application that has less than half the functionality of the current one, simply because you cannot cram 30+ years of special-case and exceptional-situation fixes into a freshly designed piece of software.
Oh, also, if your developers happen to tell you they have a plan, by all means, hear them out. They most probably know what they are talking about.
The first thing that comes to mind is that you are prematurely addressing the rewrite/refactor/replace argument. The first two steps I would recommend would be:
Unit tests
QA
It's well within engineering scope to implement these. Unit tests are an essential preliminary step before any reasonable refactor or rewrite could possibly take place. By 'unit test' I mean wrap each function call with corresponding code that proves the code works for all known conditions. In complex retrofits this may not actually happen at the most granular level but any automated tests will help immensely.
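To make that concrete, here is a minimal sketch of the idea in Python (chosen for brevity; the same pattern applies to the C/Fortran code through whatever test harness you use). The module and function names are hypothetical, and the expected values are simply whatever the current system produces today - a characterization test that pins down existing behaviour before any refactor:

```python
import pytest
from legacy import pricing  # hypothetical wrapper around a legacy routine

# Expected values are recorded from the current system's output, bugs and all;
# the point is to detect any change in behaviour during refactoring.
@pytest.mark.parametrize("quantity, customer_type, expected", [
    (1,   "retail",    0.00),
    (100, "retail",    0.05),
    (100, "wholesale", 0.12),
    (0,   "retail",    0.00),  # edge case: zero quantity
])
def test_discount_matches_current_behaviour(quantity, customer_type, expected):
    assert pricing.calculate_discount(quantity, customer_type) == pytest.approx(expected)
```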
And QA - have an independent (and aggressive) quality assurance team that rigorously tests beta releases before production. Their test plans and test procedures become essential for any kind of replacement effort.
Once you've got the code under control, then you are in a position where the business can reasonably consider massive changes.
Just a note about your comment about external consultants - no consultancy will ever care enough about the code to provide realistic quality assurance. QA ends up being joined at the hip with the business, defending the company's bottom line. It's ultimately an internal function, and an external consultant can't provide much more than getting you started, really.
I think that your description provides all of the necessary information on code quality (lack thereof). The fact that so many support resources are required also indicates the high costs involved with maintaining the existing system.
As I answered here, a good approach to consider is refactoring one piece of the system at a time until everything works at an acceptable level. I agree with Joel re not throwing away existing code (see Things You Should Never Do). Parts of your code work, so you should leave those in place whenever possible, and focus on the sections that lead to downtime.
Andy also makes a great point about starting small as well.
Another thing to try is reviewing the processes around the system. When you do this, you should try to determine which failure situations are caused directly or indirectly by user action, and whether there are configuration or environment problems. If you are having trouble fixing the code directly, then you can still prop it up by dealing with external issues more effectively.
Read the book Working Effectively with Legacy Code (also see the short PDF version) and surround the code with automated tests, as instructed in that book.
Refactor the system little by little. If you rewrite some parts of the code, do it a small subsystem at a time. Don't try to make a Grand Redesign.
The code has been around for 30 years?
Development paradigms have shifted substantially in the last three decades in many ways, and most relevant to your predicament, I feel, is in terms of the amount of time (in man days) required to create something to input->process->output something.
300,000 lines of code 30 years ago could probably fit into 100,000 lines or less today, while expending fewer man-hours(?). This could seem optimistic/ridiculous to some, but on the other hand it is achievable, depending on the type of application in question. You have given no indication as to the classification of the system - is it a real-time manufacturing process control system of sorts with sensors and actuators tied to it? An airline booking system? Does it post-process some backlog of data? In other words, could it be rebuilt in something like Java, and quickly, with an aggressive, smallish team? Have the requirements been documented, and if so, do they need updating or redeveloping from scratch? Is human safety a factor?
Just a quick sanity check, I think whether or not you should rebuild depends on (any order means the same thing):
Number of code dudes required.
Level of expertise of said dudes.
Which languages do not fit.
Which languages do fit.
How much it costs to use the chosen language(s) in terms of hardware and software.
How much does the business depend on this to stay alive.
Is it really too much downtime, or are you just nitpicking? (maybe they really don't care, but pretend to).
Good luck with that!

Why do many software projects fail today? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
As long as there are software projects, the world is wondering why they fail so often.
I would like to know if there is a list or something equivalent which shows how many software projects fail today. Would be nice if there would be a comparison over the last 20 - 30 years.
You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existent," which also includes "No (real) customer / user involved".
EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: The project was more than 10% over budget and time.
In my opinion the 10% + / - is a good range for an offer / tender.
EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But IMHO it turns out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the supposedly lazy, incompetent developer teams?
When I read the posts I can also hear that there is a big "gap" between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes ...)
I'm asking myself: How can we improve it? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic / iteration-based time schedules"?
EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called Clean-Code-Developer. The aim of the group is to bring more professionalism into software engineering. Regardless of whether a developer has a degree or tons of years of experience, anyone can be part of this movement.
Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work. And he has an internal value system which helps him to improve and become better.
Check it out on: Clean Code Developer
EDIT: Our company is doing at the moment a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get a feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.
Bad management.
Projects are not successes or failures based on some underlying feature of the project, but on whether they fulfill the needs of the users. (They can fail altogether, in which case there was a gross misstatement of what was possible.) It is mostly in the process of evaluating the feasibility and cost-benefit ratio of the project, and establishing goals, that software projects tend to fail or succeed.
There's a disconnect between people who deal with facts and things (like programmers) and people who deal with other people (like sales types and managers). To a programmer, the facts are the facts, and have to be dealt with. To a sales person, the facts are what other people think, and are changeable.
There's also differences between tangible and intangible facts. Nobody thinks that workers could build a large bridge in a month if they were really motivated; they can see all the steel and concrete and other stuff that has to be moved and fixed into position. Software is much less tangible, and lacks the physical restrictions: while it is not even theoretically possible to build the bridge within a month, it is conceivable that a team could create a large project within a month, as "all" they have to do is get everything right the first time. It is physically possible to type thousands of lines of code a day; it's just that the chance that they're usable as is is so close to zero it doesn't matter. The actual productivity of a top developer is actually pretty unimpressive in word count, compared to (say) the productivity of a journalist.
Therefore, those who are used to flexible facts don't have the imposing physical limits to remind them that things can be pushed only so far, no appreciation for what programming actually requires, and no good feel for how much productivity is realistically possible. Moreover, they know how to get their way in negotiations, much more than the average developer, so in negotiations about what's possible they tend to assume more than they can, in the real world, get.
In the meantime, software development is inherently fuzzy, because producing the physical product is trivial. I can produce a copy of software quickly and cheaply, once it's been developed. Software development is design work, pure and simple. Anything corresponding to manufacturing is ruthlessly eliminated with such things as compilers and wizards and code generation. The developer, faced with the manager who wants the impossible, finds it hard to say the impossible is actually impossible, because there's no way to say it's actually impossible. Given facts that are unknown enough to feel flexible, the person with strong negotiating skills and determination will typically get the answer he or she wants.
Given this disconnect, one might ask whose responsibility it is to bridge it. The answer is, in my opinion, clear. The responsibility for understanding how different people think belongs to the people who specialize in dealing with other people. The responsibility for coordinating different types of people belongs to the people whose job it is to coordinate these things. Therefore, managers.
Managers who do understand software development and developers, and can deal well with other managers, will do well, and their projects will generally succeed. There are still far too many of the other type in the world.
Not a direct answer, but I found the Virtual Case File to be a fascinating case study on how a big government-backed well-funded project can still tank.
You can also add your top reason why a software project fails.
Another IEEE Spectrum Online article "Why Software Fails" examines this very question. It summarizes the major points as follows:
Unrealistic or unarticulated project goals
Inaccurate estimates of needed resources
Badly defined system requirements
Poor reporting of the project's status
Unmanaged risks
Poor communication among customers, developers, and users
Use of immature technology
Inability to handle the project's complexity
Sloppy development practices
Poor project management
Stakeholder politics
Commercial pressures
Poor planning.
Hofstadter's Law
It always takes longer than you expect, even when you take Hofstadter's Law into account.
Mismanagement.
SW projects get started by throwing developers against a perceived problem. Business requirements crystallize as the project progresses. New functionality gets added while deadlines stay put. More developers are thrown in. Original project members quit or get fired. By this point too much time, money and resources have been invested in the project, so it cannot be canceled. As the deadline passes, the project is declared finished and successful despite the obvious lack of a finished product.
Come to think of it - I've yet to see a SW project fail...
Honestly, I think it's because most programmers are not very good at what they do (and I don't mean just cranking out code). People on stackoverflow are probably the exception. I don't know about the rest of you, but as a consultant/contract programmer I have worked in or around many places, and the ratio of mediocre or poor programmers to good ones is about 10 to 1.
One of my strengths has always been estimating accurately and then delivering on time and on or under budget - I always aim for coming in 10% under cost and on time. Then I like to tell my client that because I got things done for less $$ than expected, which of the "extras" would you like to add in?
Even a perfectly functioning product that is late and/or over budget will be considered a failure by many business managers. Programmers often focus on just the technical aspects of what they do, with little regard for the cost or deadline. You really need to do all three well for it to be deemed successful project. There are many other programmers that could code circles around me without a doubt, but for the person paying for the project, that is rarely enough.
It is because no-one seems to read anymore.
Every single reason why projects fails has been analyzed time and time again.
You only have to read three books to know why 80% of projects fail:
The Deadline: A Novel About Project Management (Tom Demarco, published 1997)
It's a great introduction and it's pretty entertaining.
Peopleware : Productive Projects and Teams (Tom Demarco, published 1987)
The Mythical Man-Month: Essays on Software Engineering (Fred Brooks, published 1975)
We as a profession simply seem to forget everything every 3-5 years (see "centralised computing is inefficient; let the clients handle it" vs cloud computing).
(From a programmer's point of view - I'm not a project manager, but I've often been involved in the process.)
A number of people have mentioned that bad programmers are endemic. But I think this is true in another sense as well - we're all bad programmers in that we find it difficult to anticipate complexity, an unavoidable issue that 50 years of magic bullet estimation and planning schemes have failed to solve.
Anticipating the side effects of large projects gets exponentially more difficult as projects grow. This is a dull truism, for sure, but for me it means that on any project I've worked on where I've been involved in the estimating process I've run into some case where there's an unanticipated consequence of a design decision that causes everything to come to a grinding halt, or at least a few days of bugfixing - just something that nobody foresaw, not any sort of malpractice or stupidity. It's just the nature of a complex enough system.
Aside from the built-in uncertainty, there's also a tendency to underestimate things whose outline is known, because the fact that they have less uncertainty makes them seem simpler to implement.
So the uncertain stuff gets magnified, the clear stuff gets minimized, and what really kills you is the thing that you didn't think would be uncertain.
The number one reason: a failure of project management.
A PM's raison d'être is to make a project succeed; ergo a project failure is their failure. Certainly there are factors beyond their control, but it's still within the PM's job description to manage that risk, and the only get-out clauses should be someone higher up the food chain taking decision control (which is a terrible thing to do to a PM) or acts of god.
In my experience failures mostly occur when PM work has been fast and loose or non-existent, including when decisions start to flow from sales people and when the client starts decreeing change control. A good PM is priceless.
Failure is a judgement -- more of an accusation, really.
"The project was more than 10% over budget and time."
Which budget? Which schedule?
6 months ago, I wrote a plan saying it would take 6 months.
3 months ago, the users asked for more stuff. I gave them a plan that said it would take 9 more months.
Last month I was told that the project was 6 months over budget and therefore a "failure".
But wait. It delivered what the users wanted. It was over the "original" estimate. It was under the revised estimate. The users want more. IT wants less.
I'll approach it from a different angle than most of the rest here.
I've watched a project slowly fail over a period of time. Sure, it's gotten better in that time too - but it still isn't profitable. In this market, profitability - being in the black - means success.
Why is it failing? I think it's simple: you get what you pay for.
Software is like a bank account, not primordial ooze. If you don't put resources into it (time, money, focus, effort) then you won't get anything out of it except failure and cost. So you must invest things into your project, and sometimes the earliest work sets the stage for years to come. You can't throw mud at your computer and expect a new mouse two years and $10 million later, so likewise there must be effort expended.
One of the biggest problems today is "budget developers" in a third-world country. I don't begrudge them their part of the market, but for a well-funded Silicon Valley startup to seek them out and get a budget foundation (framework or even prototype) is to make a poor investment in the future. This very same budget framework is what is causing my friends so much of a hassle today. It works now; it worked when it was written, but it wasn't written well and few even take the time to maintain it. Were the company to stop and rewrite the software the way it should have been written in the first place, they wouldn't have all this trouble. Can they afford the time? Nope. They have to make it profitable before they can even think of it.
As the saying goes, "I can make it: cheap, fast or good. Now, pick any two of those." Everyone wants all three, myself included. But if you don't invest the time, planning, and work required to make your project a success from the start ... then don't expect anything you can be proud of later. It'll feel like a forged Mona Lisa where you, and every other engineer like you, can see a defect here and there that shouldn't have been there from the start.
So:
Don't undertake what you cannot afford in: time, money, effort, focus, etc.
Don't skip planning!
Don't be afraid to rewrite early when it counts the most. (Later it'll be worse than a trip to the dentist, believe me.)
Don't underestimate the power of bureaucracy to prevent you from doing it right.
And don't be cheap where you should spend the most of your time. It will cost you later, guaranteed. And if not you, then someone else will take the bullet for you.
One common mistake is that sales people and technical people do not communicate sufficiently. So the salespeople sell things that are technically not feasible within budget. (And then they run off with their bonus :) )
Being over budget and time is not a good definition of failure (and actually being in budget and time doesn't always mean success). Take the following examples provided by Hugh Woodward, PMP, in Expert Project Management - Project Success: Looking Beyond Traditional Project Metrics:
Sydney Opera House: With its graceful sails dominating Sydney Harbor, the Sydney Opera House is arguably one of the most recognized buildings in the world. Yet, from a project management perspective, it was a spectacular failure. When construction started in 1959, it was estimated to cost $7 million, and take four years to build. It was finally completed in 1973 for over $100 million.
[...]
Project Orion: This massive effort to develop Kodak's new Advantix photographic system was reputedly very well managed from a project management perspective. PMI recognized it as the 1997 International Project of the Year, and Business Week selected the system as one of the best new products of 1996 (Adams, 1998). But Kodak's stock price has fallen 67% since the introduction of the Advantix system, in part because it failed to anticipate the accelerating switch to digital photography.
Corporate Intranet: Finch describes a project that involved the implementation of a corporate intranet to globalize and improve communications. From a traditional project perspective, it failed to meet its success criteria, but not significantly. It was one month late and believed to have been accomplished with a small budget overrun. But both the project manager and senior management viewed the project as successful. The hardware and software had been installed successfully with a minimum of disruption, thereby providing all staff members with access to the corporate intranet. Following implementation, however, employees made only limited use of the intranet facilities. The main objective of the project was therefore not achieved. In this case, both the project manager and senior management focused on an objective that was too narrow.
[...]
Manufacturing Plant Optimization: A paper manufacturing company with five plants across North America decided to increase its manufacturing capacity by embarking on a de-bottlenecking program. A project team was formed to install the necessary equipment, and charged with completing the work in 18 months at a cost of $26 million. But almost immediately, the project team was asked to defer major expenditures until an unrelated cash flow problem was resolved. Rather than stop work completely, the team adopted a strategy of prototyping the technologies on which the de-bottlenecking program was based, and actually developed some cheaper and more effective solutions. Even when the project was authorized to proceed, the team continued this same approach. The project eventually spanned five years, but the resulting capacity increase was three times the initial commitment. Not surprisingly, the company immediately appropriated another $40 million to continue the program.
[...]
So in these examples, which ones were truly successful? Examples like the 2002 Winter Olympics and the Batu Hijau Copper Concentrator would suggest that these are truly successful because they not only met the traditional project managers' definition of success, but also met the projects sponsors' perception of success.
As we start to look at examples like Project Orion, the Corporate Intranet and the Laptop Upgrade, we notice that the traditional metrics start to fail. These projects are considered successes by the project managers' definition of success, but failed to meet the sponsors' success criteria. The Project Orion example is quite astounding, as this project was recognized by PMI (the Project Management Institute) in 1997 as the International Project of the Year. Yet it did not increase Kodak's revenue, because they did not foresee the adoption of digital cameras.
Most interesting are the examples of the Manufacturing Plant Optimization and the Sydney Opera House. They both failed to meet the traditional project managers' success metrics but were in fact considered successes. This is particularly shocking when you see that the Sydney Opera House had a "cost overrun of 1300%" and a "schedule overrun of 250%".
Once we realize that projects can fail to meet the traditional metrics of success, but still be successful to the stakeholders, this creates a quandary for the project manager. How does one really define success? Is it possible that a "Challenged" project could be canceled that would have met the sponsors' needs? Is it also possible to identify a project that should be canceled that is currently on time, on budget and meeting the defined needs?
What do you think of that conclusion? How do you really define success?
My answer is rather different from the rest of these, but quite common around here: killer bugs. I had a project go an extra two weeks (50% extra time) because of one switch in the source that I didn't know about until I dug through the source code (there was nothing documented in the help or on the web).
People/companies do not proudly shout about their failures, so many cases won't ever be heard of.
Poor use of practices and software development methods. In my experience, one of the big reasons a project fails is that the development team uses the wrong method to approach the software development process. Choosing a methodology without having a good understanding of how it works and what it takes can bring time-consuming issues to a project, like poor planning.
Another common problem is the use of technologies without a prior evaluation to understand how they can be applied and whether they bring any value to the project.
There have been some good studies done on this. I recommend this link from the Construx website (Steve McConnell's company).
The Construx link above is really good!
Many projects are managed on a rosy picture of reality. Managers tend to power-talk developers into optimistic estimates. But say a 20-week project gets "talked down" to 10 weeks. The requirements phase will now be 1 week instead of 2. After 1 week, the requirements aren't finished, but the project moves on. Now you're working on shaky ground - and on a stretched schedule!
This can be funny. Once there was this old guy in a room opposite mine. His job title was system administrator. But the system he was supposed to administer wasn't there. It had never been finished, although management thought it had been. The guy played games for about a year before he got bored and moved on.
The funniest part? Management put up a new job opening after he left!
IT Project Failures is a blog about project failures that may have a few answers here if one wants to read about it.
Myself, I think a large part of this lands on the question of being able to state exactly what is expected in x months at $y when really the answer is much more open-ended. For example, if a company is replacing an ERP or CRM system, there is a good chance that it isn't going to get all the requirements exactly right, and thus there will be some changes, bug fixes and extras that come from taking on such a large project. For an analogy, consider how many students entering high school or university could state their precise schedule for all 4 years without having taken any classes, and actually stick to it in the end. My guess would be that very few do, as some people change majors, courses they wanted to take aren't offered, or other things happen that change the expected result. But how is that captured in a project plan, when we started here and, while we thought we were going there, we ended up way over in spot number three?
The last statistic that I heard was that 90% of projects are either over time or over budget. So if you consider that failing, quite a bit.
The reason they fail mainly lies in process. We as software engineers don't do a good job of gathering requirements and controlling the customer. Because building software is a "mysterious" job to people outside of IT, it is difficult for them to understand why last-minute changes are difficult. It's not like building a house, where you can clearly show them why it isn't possible to quickly add another door to the back side of a house made of brick.
Not only software projects go over budget, or take more than scheduled time to complete. Just open the newspaper and look at public projects like bridges.
But for a software project it is far more easy to cancel everything. If a bridge or building is half finished, there is no way back. Half the infrastructure is in place. It is very visible and it takes money to remove it.
For a software project you can press Shift-Delete and nobody notices.
It's just a very hard job to do an accurate cost analysis.
In the case of the FBI's Virtual Case File system, it comes down to poor program management. The program had pre-9/11 requirements and post-9/11 expectations. Had government management done their job, there would have been contract mods.
http://government.zdnet.com/?p=2518
"Because of an open-ended contract with few safeguards, SAIC reaped more than $100 million as the project became bigger and more complicated, even though its software never worked properly. The company continued to meet the bureau’s requests, accepting payments despite clear signs that the FBI’s approach to the project was badly flawed, according to people who were involved in the project or later reviewed it for the government."
Although $105,000,000 for 700,000 lines of code comes to $150 per line of code. Not too bad.
To truly gauge whether a project is really successful, the original estimate / budget is probably a poor yardstick. Most engineers tend to underestimate how much time it will take because of political pressure, and they don't know how to estimate anyway. Many managers are poor planners because they want unrealistic deadlines to please their boss, they often don't understand what's involved, and it's something they can look at, understand, and use as a stick in meetings, as opposed to actually helping solve problems. Practically, it helps businesses make rough projections of expense for budgeting purposes; at least it gives them something to go by.
A better measurement of project success would be - are the users happy? Does it help the business make money? How fast will the money gained help recover the cost of the system? These are harder to gauge than a simple plan, though.
In the end, I've found you're better off delivering on deadline, even if it means working some overtime. It sucks, but that is the reality of it.
As stated previously, the vast majority of people involved in software development do not actually understand how to:
ask the right questions to learn about the problem
appreciate the user's goal and judge expectations
understand the technology available and the related ecosystem
Most of the team directly or indirectly involved are poorly skilled, and do not appreciate or know when they are wrong so that they can take action.
Even with perfect requirements and related definitions, too many developers don't know what they are doing.
Just look at the types of questions asked here. Would you go to a doctor who asked the equivalent kind of question? The scary thing is that they ask and don't know how to learn or reason.
Different agendas
Management do not really understand what a developer does, how they produce code, or the difficulties encountered. All they care about is the final product delivered on time. But developers care about the technical aspects - well-produced code in a solution they are proud of.
Deadlines
I've often heard developers say they wish they could produce better code, and that deadlines often push them into producing something that just works, rather than good code. When these deadlines are unrealistic, the problems are exacerbated.
I think this thread managed to gather the biggest group of talented, unhappy software developers, engineers, project managers, etc. that it is possible to gather in one place.
I share the point of view of most of the posts here, and I think they come out of a lot of suffering from seeing co-workers not doing a good job, when the job (programming) and its success are among the most important parts of our lives.
http://www.clean-code-developer.de/
They have an incredible cause! And their philosophy, if taken further, could manage to create a new layer of heroes, as the mainstream of developers and IT professionals is so rotten these days.
I'm working on something similar here in Brazil, 'cos I love our profession of bringing software alive, both as a PM and as a software developer (.NET), and I can't stand people who see programming as nothing but a way to make big money with almost no effort.
... Sure, I don't consider staying up all night in front of the computer a mark of genius, but the few who matter have done it more than once.

Coming up with time/staff required estimates for converting a desktop app into a client-server app

My boss is bidding on a project to convert a desktop application into one that runs online as a client-server application. The original app has a little more than a quarter of a million lines of C++ (MFC) code that's not cleanly divided between engine and front-end.
I need to come up with estimates of how long it will take and how many people would be needed for such a project. We don't have anyone on staff capable of doing this project, and we don't have any Windows machines, so we'd have to subcontract/outsource.
Alternatively, what arguments can I use to convince my boss that this project is a bad idea?
It would take a very long time, as most likely you would have to change languages. It would basically be a complete rewrite. You may even be better off just defining what the old system did and rewriting it from scratch, only referring to the old code when you need to duplicate business logic.
Without a clear division between the engine and the front-end, it's a bad idea. Giving over the code to a contractor or such would require a large amount of training just to understand the code, including parts that are meant to be rewritten as web apps.
You would first need to estimate how long it would take to clearly define that boundary, and then you can work out how long the rest of the project would take. That way you can train the contractor etc. on the parts that relate to the presentation tier and the API for the engine without explaining how the engine works (which should be irrelevant to the presentation components).
give the info to qualified consultants/outsourcers (send it to me, for example) and consider the ROI based on their estimates
it definitely sounds like you don't have the ability to do this efficiently in-house, but that doesn't mean that it is a bad idea - it depends on the ROI
for example, if your boss thinks this thing could generate $1M in revenue in the first year and a contractor could do the conversion in 6 months for $250K, the ROI (300% in 18 months) is probably good enough to do it (see the quick check below)
if the numbers are reversed, then it is obviously a Bad Idea ;-)
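For what it's worth, a quick back-of-the-envelope check of that hypothetical example (the figures are the ones above, not real data):

```python
revenue_first_year = 1_000_000  # what the boss thinks the product could generate
conversion_cost = 250_000       # what a contractor might charge for a 6-month conversion

net_gain = revenue_first_year - conversion_cost
roi = net_gain / conversion_cost  # 3.0 -> 300% over roughly 18 months

print(f"Net gain: ${net_gain:,}, ROI: {roi:.0%}")
# Swap the two figures and the ROI goes deeply negative - the "Bad Idea" case.
```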
This sounds like a job for COCOMO II. COCOMO has been used successfully to estimate size, cost and staff size, but it can be a pain. It requires a lot of details and time. But it does answer the question of how to go about estimating the size of the project, the number of people required, and the time it will take.
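As a rough illustration of what a COCOMO II-style calculation looks like, here is a sketch using the commonly published COCOMO II.2000 constants; the scale-factor and effort-multiplier ratings below are placeholder guesses, not a real estimate for this project:

```python
# COCOMO II Post-Architecture model (illustrative only).
A, B = 2.94, 0.91   # effort calibration constants
C, D = 3.67, 0.28   # schedule calibration constants

ksloc = 250                                       # ~250,000 lines of code, per the question
scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]    # placeholder ratings for the 5 scale factors
effort_multipliers = [1.10, 0.95, 1.15]           # placeholder subset of the 17 cost drivers

E = B + 0.01 * sum(scale_factors)
effort_pm = A * ksloc ** E                        # person-months, before cost drivers
for em in effort_multipliers:
    effort_pm *= em

F = D + 0.2 * 0.01 * sum(scale_factors)
schedule_months = C * effort_pm ** F
avg_staff = effort_pm / schedule_months

print(f"Effort:   {effort_pm:.0f} person-months")
print(f"Schedule: {schedule_months:.1f} months")
print(f"Avg team: {avg_staff:.1f} people")
```

The real work is in rating those scale factors and cost drivers honestly, which is exactly the "lot of details and time" mentioned above.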
The only way to properly estimate this is to give it to someone who 1. will be working on it; and 2. understands the language it's in as well as the language you are converting to.
Regardless, it is ALWAYS a BAD IDEA to bid on projects for which you have no experience. Every single time I've seen someone do that they badly underbid it. Not just in terms of the amount of work required, but also in the learning curve necessary to even begin to understand the underlying processes.
Especially beware of bidding a project you have no experience for then explicitly telling a subcontractor what their budget is to get a project done. I would never work under those conditions, again, because 100% of the time the person doing the bidding is going to get it way wrong.
One more thing: if it's a desktop app with a back end, then it's already client-server... Do you mean to take an existing client/server app and convert it to a web app?

How do you bring a failing project back on track? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
You must have heard the archetypical story of a failing/failed project:
A team of inexperienced programmers work 24x7
Bugs are fixed only to introduce new bugs
Customer is screaming that he could not even do the basic stuff (Saving/Querying) etc.
Programmers used to having the spec handed down struggle to improvise
No automated unit tests aggravate the situation
Architecture document that looked nice on paper was not followed in practice
Third party components used become bottlenecks not having been tested for fitness in the first place
Milestone after milestone missed
The team is not able to come up with a delivery date, as nobody agrees on the quantum of work that actually needs to be done
No technical leadership / or a Cowboy Coder that can take on the technical issues
Now, if you were to be brought in as #10, what would be your first steps?
Update: First of all: Thanks to you all for chipping in. Well... I'm being brought in as #10. I was the original Architect anchoring the solution when we made the proposal to the client. Then, unfortunately, I couldn't take on the delivery responsibilities as I was assigned somewhere else. :)
Let's say it's a webification of an existing desktop application. I'm now being brought in as #10. Running away, sadly, is not an option. I'm sure this can still be reversed by following agile best practices and just wanted to tap the community for ideas.
The larger question perhaps is this: If the development team does not have specs but only the (baselined) code for a running application, the original solution called for looking at the code and extracting business rules on the fly. Now, the inexperienced programmers are reluctant to look at VB 6.0 code and want documents! So how do you fight this if you were to introduce Agile processes?
Vyas, I feel like I could have written this question. My previous job involved resurrecting a KVM project that had failed after a year's development. Specs were in the form of a user manual and developers' experience with similar products. I ended up teaching C to 3 assembly programmers and re-architecting from scratch. We brought the product successfully to market in 4 months. (Then I resigned. Go figure.)
Some of the things I'd do again, particularly with an inexperienced team:
1. A team of inexperienced programmers work 24x7
10. No technical leadership / or a Cowboy Coder that can take on the technical issues
Give them a (short!) break from the project to "recharge." Maybe a day, maybe an afternoon, or maybe a long lunch on you. It will mark the end of the "old" project and the beginning of success.
Get their agreement to work their butts off when they return, and promise that you will be their go-to guy, cheerleader, and flak jacket. You, collectively, are a team, and your job is to forge their path, eliminate distractions, and lead them.
Plan an immediate success, no matter how small, and maintain a "can-do" attitude.
8. Milestone after milestone missed
9. The team is not able to come up with a delivery date as nobody agrees as to the quantum of work actually needs to be done
3. Customer is screaming that he could not even do the basic stuff (Saving/Querying) etc.
Take small bites! Break each piece down as far as possible, then deal with the small components. You'll identify "gotchas" early and be better able to scope the whole project.
Define your interfaces. Anytime you can isolate a chunk, do it. This allows parallel development, because you've already decided on parameters, preconditions, assumptions, what happens inside, and return values. You can stub it out, and build other modules and tests independently (see the sketch after this list).
Prioritize. Focus on the defects and issues that affect the customer first. New features come last. If necessary, defer features rather than delivering buggy code.
Assign responsibilities. Volunteers are preferred, each in his/her area of expertise, but one person must be accountable for each task.
Track defects, and record everything that will help you reproduce, locate, and fix them. Document any that remain at delivery time, so the customer won't be surprised.
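A minimal sketch of the "define your interfaces and stub it out" idea, in Python; ReportStore and archive_report are hypothetical names invented for the illustration, not anything from the project being described:

```python
from typing import Protocol

class ReportStore(Protocol):
    """Agreed boundary: once this signature is fixed, work can proceed in parallel."""
    def save(self, report_id: str, payload: dict) -> None: ...
    def load(self, report_id: str) -> dict: ...

class InMemoryReportStore:
    """Stub implementation used until the real database-backed store exists."""
    def __init__(self) -> None:
        self._data: dict[str, dict] = {}

    def save(self, report_id: str, payload: dict) -> None:
        self._data[report_id] = payload

    def load(self, report_id: str) -> dict:
        return self._data[report_id]

def archive_report(store: ReportStore, report_id: str, payload: dict) -> None:
    # Caller code depends only on the agreed interface, not the implementation,
    # so it can be written and tested against the stub today.
    store.save(report_id, payload)
```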
4. Programmers used to having the spec handed down struggle to improvise
6. Architecture document that looked nice on paper was not followed in practice
You will create the spec details as you go, each piece just before it's needed. It needn't be pretty, complete, or even written, as long as everyone understands the current task and you've got the big picture.
Discuss the implementation, one piece at a time, when the developer is ready to code it. Write the skeleton yourself if necessary, and let the team fill in the "guts." You want to keep them focused on each task, without "improvising."
Be available to answer questions as they arise. Your primary goal is to keep the team productive.
2. Bugs are fixed only to introduce new bugs
5. No automated unit tests aggravate the situation
Plan and start unit testing ASAP. If possible, enlist resources outside the team.
Fix small problems before they grow larger--or get hidden. Confidence in each small piece builds confidence in the whole.
7. Third party components used become bottlenecks not having been tested for fitness in the first place
Brainstorm solutions when you're not coding. Don't let them stop your progress if at all possible. Can you encapsulate or code around them? Replace them?
General suggestions:
Stay ahead of the team. Anticipate and try to solve problems before your team hits them. Gather any necessary information before it's needed.
Communicate constantly. Make it clear that you want no surprises, and solicit concerns, questions, status, roadblocks, etc throughout each day. Encourage collaboration and share "discoveries" across the team.
Celebrate every success. Compliment a clever solution, bring donuts when a problem is solved, demonstrate a new working feature ... anything that shows the team you appreciate them.
Get each task done, then move ahead. Don't waste time tweaking, enhancing, or reworking anything that isn't a direct barrier to success.
Keep your promises to the team, the customer, and your management.
Good luck -- please keep us posted!
Run away or find a new job. This is a death march and they need a scapegoat.
Often, the death march will involve desperate attempts to right the course of the project by asking team members to work especially grueling hours or weekends, or by attempting to "throw (enough) bodies at the problem" with varying results, often causing burnout.
Freeze releases and start fixing issues with the program. Deal with the customer complaints by priority (the business side of the company can prioritize) and get the program running. Once you get the biggest issues out of the way, start cleaning up the code. Assign tasks to other developers, and start enforcing coding practices on all new code.
If you can do whatever you want, then look at what the real issues are and deal with them. If that means putting together a new team to develop the software all over from scratch, so be it. But you should try to at least fix the major bugs first. Don't bother introducing new features; they only compound the problem, and a program that doesn't work, whose problems aren't dealt with, will lose you clients.
Number 10 is obviously the worst problem, or at least the root of all others. Find someone with some creativity and ability to deliver a project, and give them free reign to do anything - including start over.
I hope you are getting paid really well. In any case, my plan would be something like these steps in the following order:
0) Stop adding features or functions across the team. Allow bugs to be addressed while the following steps are carried out; at step 5, stop bug fixing and resume feature development:
1) Apply what I call the Inverse Staffing Law: weaker team members slow down the better and faster ones, and generally a late software project needs people removed, not added. So, you need to assess the quality of the team members as individual contributors. Eliminate weaker staff from the team, because presumably there are some. This is best done by reviewing their code and examining their bug fixes to figure out who is making the code worse vs. better, and cut them from the team. This is not a time to mentor; you are going to need the best folks to have a chance of "fixing" the situation in an optimal period of time. If you can't fire them or reassign them, have them get coffee or something for everyone else left.
2) Assess the code itself. Identify areas of the code that are not constructed well and/or not well abstracted. If an area of code is not constructed well and/or is obviously brittle at what it is supposed to do, target it for a rewrite. This feels painful at this point, but it will save you time in the long run. Recurring bugs and/or a history of fixes will help identify the code that can't be salvaged. If a code area or module is fundamentally constructed well but not abstracted well at the interface level, it should be suitable for refactoring. This will save significant time, and it is useful code. Keep a list of the rewrite areas, the refactor areas, and the suitable areas.
3) Define a new, reasonable architecture that you believe will result in a robust and complete solution for where you eventually want to be in features and functions. The architecture might not be as optimal as starting clean, but it should, in effect, match up what you have with where you would like to be.
4) Work with the stakeholders to decide what will make an acceptable first release, attempting to table as many features as possible for "later" releases. Maybe you can't cut anything, but if you can, now is the time to do it.
5) Stop the background bug-fixing efforts and assign the defined work to the (remaining) team to estimate a reasonable new implementation plan for the rest of the functionality. They need to own the schedule. Roll up the schedule and be fairly conservative. Now you have a reasonable prediction of when you could actually have something workable and robust.
6) Implement the remaining features and then harden up the release by tackling the remaining bugs. I am assuming all the normal good software development practices are observed here like source control, unit tests, etc.
7) Remove as many barriers as possible to keep the team cranking out stuff as fast as possible.
8) Monitor for issues, and assist by getting your hands dirty wherever you can. Offer to take on the nastier issues, to the extent that you can help and still keep all members of the team productive.
Good Luck!
This isn't about technical leadership any more, it's now about project management.
You as the technical lead will just be shifting deckchairs on the Titanic. So here's what I would do if I was the de-facto project manager.
1) Identify the project sponsors and stakeholders - both the official ones and the real ones.
2) Go to them and request that the project "goes dark" for a week.
3) If they don't agree, walk away from this project.
4) If they do agree, call a project time-out for a week - everything stops.
5) Spend that whole week talking to the important people on the project to identify the real project state.
6) Whilst engaged in those discussions, start formulating a project recovery plan, emphasising possible trade-offs between scope, schedule, budget, and personnel.
7) At the end of the week, decide which (if any) of your possible project scenarios are feasible.
8) Take the best of these scenarios back to the project sponsors and stakeholders, and start negotiating.
9) When a way forward is agreed, reboot the project and pray - possibly not in that order.
Common sense has already been pointed out to you by Maxim (quit the death march). But if for reasons unknown you wish to persist, let me regale you with my experience in a similar situation - perhaps it might come in useful.
It was my first job, in a sleepy old town where good computer jobs were hard to come by, and I desperately needed one immediately after college. I was hired because the management thought I was enthusiastic enough and might be better than nothing (I offered to bring in my own computer to save them the cost of giving me a PC, and offered to work for the experience alone).
The project had been abandoned by its creators due to the death march situation; they had gone away after deleting all the comments in the code and performing other obfuscations. Nobody knew the Win32 / MFC stuff either.
I simply started studying the code on good old paper and pencil (lots of rubbing out and corrections) until, within 20 days' time, I knew the entire code, including the variables, by heart, and what was happening where.
Armed with this knowledge I was able to get a critical piece working which had eluded everyone before. Of course this was nothing but a drop in the ocean, but it enabled the management to buy the client's confidence: "smart fellow - got him with great difficulty - already got x working - you will have your stuff working within y time".
Once the client was convinced and we were able to buy some time, some pressure was taken away. This got some hope back into the team and we started to hammer away for good. Six months later I got promoted to project lead, and nine months later we had our fix shipment (with lots of progress demos and a visibly more and more satisfied client in between).
As you can see, the elements of success are not directly duplicable. But I would summarize that you need to breathe some hope into the project first - show some progress and win confidence - that of your peers, management, and the client. Once that is in place, the technical stuff should be corrected too - there is nothing to replace this part of the equation.
If that does not seem likely, all that hard work (oh yes - lots and lots of work like you never imagined - why do you think it's called a death march) would be a waste, and you had better quit even before you start.
I had no choice, I was hot-blooded, and I desperately needed a job. The technical details were something I could work magic upon, and everything just clicked into place. I really earned a lot of goodwill and self-respect with that piece of work, but in the long run it's just a story I can narrate with great aplomb and nothing more, except for those few in the know.
Things might be different for you, but it's for you to decide.
Good luck
Make sure you aren't the scapegoat
Cut scope creep
Trim functionality "requirements"
Implement a faster dev cycle (maybe Agile/Scrum/XP/whatever)
If you can, run away.
If not, you need to stop all activities that make the project unstable - including coding and fixing defects.
Assess where you are
Break up the requirements into much smaller "milestones"
Read some practical books (McConnell's "Software Project Survival Guide" comes to mind).
Identify all the problems and risks. Communicate all those to all involved.
Work on each piece one at a time.
Celebrate improvements and milestones as they are reached.
Good luck. Your scenario sounds pretty bad. It may not be salvageable - and things have to change to get better.
If you really had to get it on track (if bailing isn't an option)
Start off by accepting that it's a failure in management. You might then want to go on to implementing a strict but light process.
I'd suggest some form of Agile, since it's the easiest to successfully implement without a GURU, but you have to be VERY strict about it, including Pairing, Ruthless Refactoring, Reviews, Spiking functionality, Visibility, TDD, one-week cycles, 8-hour workdays (Yes, longer than 8 tends to harm productivity more than help, as you seem to have noticed)...
Don't be cutting anything out either. Parts of Agile rely on other parts--without the pairing, refactoring and testing you cannot eliminate upfront design (one of the biggest agile failures).
Don't forget about the management side of it. One week iterations to start (demo EVERY week). Constant adaptation. Very short stand-ups every day to address issues. (Keep to 15 minutes max, table longer issues, etc) Burndown charts, core-team with a client on it.
You can't just have a 15 minute meeting every week and 2 week iterations and call it Agile, but if you do it right, it just MIGHT give you a chance. You might get a GOOD agile consultant in to train you on getting started.
Also, constantly evaluate what works and what doesn't. Be prepared to fix what doesn't work. Weekly meetings to analyze that weeks' development successes and failures.
Overall it CAN work, and can bring a flailing team into line, but it's not trivial. The nicest part is that you can implement it without taking huge chunks of time out of your current development. You just keep developing, but you do it better.
Tough situation: you have zero customer trust, and you basically can't be successful in that situation, no matter what.
For all intents and purposes the project needs a reboot; the unfortunate fact is that incumbent shops usually don't get this opportunity to start over and re-evaluate everything that is there.
I hate to say it, but you need to halt development and spend a month working out what went wrong...
The result needs to be a plan for a feasible 6-month to 1-year delivery, really making them focus on what the must-haves are, plus real trade studies on your third-party components. And trashing the code base needs to be an option; start a new source control project, and when you get to a particular module, port the pieces that make sense and leave the garbage behind.
Agile is great and all, and a valid approach once you get a real plan in place, but it's not going to fix a broken relationship with your customer... or all the junk that's already there.
Here's the summary of key learning after reading through your experiences:
Maxim
1: Make sure this is not a "Death March"
Ellie
2: Make sure what's delivered works
3: Refactor & Realign codebase to Architecture / Best practices
4: Look at what the real issues are: Is the team technically competent to deliver?
Kendall
5: Ensure availability of Technical Leadership
Bill K
6: Implement Agile Processes (At least automated unit tests if not TDD, short iterations that make progress visible)
7: Get customer buy-in
8: Be prepared to throw out what cannot work (wishful thinking aside)
Warren
9: Make sure the team members that remain are given a chance to start over
Tim
10: Motivate team and as improvement becomes visible reward them
jsl4980
11: You need buy-in on schedule from your team (most important) & customer
[This raises more questions. What if your customer asks whether the team is competent enough to stick to your schedule? What if you yourself know that the timelines the team is proposing just shows their lack of understanding]
Ather
12: Is the team committed?
13: Do you formally QA?
Patrick
14: Start over, redesign and reconform to Architecture/Design best practices for modules yet to be developed.
The summary has 14 items. You can't do them all. So, what's the first step?
Here's what you have to do first -- get one thing improved.
You've got fundamental quality issues. (#2-5)
You've got architecture and component issues. (#6, 7)
You've got schedule problems. (#1, 8, 9)
You can tackle quality. Formal unit testing, heading toward TDD, can help. This might be hard because architecture issues slow testing down.
You can tackle architecture. This might be harder because it will probably involve rework that will not appear deliverable. But it may fix quality issues. Or, it may be compounded by fundamental testing problems.
You can tackle schedule. Without other corrections (i.e., quality or architecture) you may not get any traction with fixing schedule issues.
I think that overall improvements in people's attitudes come from starting with one success -- any success -- as early as possible. What's the lowest-hanging fruit?
One long-standing bug? One unit test suite to find and fix that bug?
One major architectural feature? Would a diagram that everyone can post in their cube help? How about a presentation to clarify things?
One new use case? One new feature that actually works?
Here's a good book on the subject:
Catastrophe Disentanglement: Getting Software Projects Back on Track
First off, be resolved that you may fail -- if you can't accept that, don't take the challenge. And that includes being a scapegoat (it does happen). Management won't look at it in those terms (i.e. they're not intentionally/consciously 'setting you up'). But that is a reality of a corporate environment; if you take on the responsibility (often with more pay than those that don't), then your head is for the block if things don't work out. You have to be ready to stick with it for the long haul too. I was once placed on a client site for 8 months to fix a waning project. And as you saw, one of the other blog-posters here spent 9 months before a release version was ready.
Now, assuming you are okay with the possibility of it going all pear-shaped in spite of your efforts, this is what I suggest:
A bug tracking system is going to be your number one best friend; it will allow you to regain a semblance of control. You can't hope to understand a complex system as a whole, so 'chunking' it will help, and a bug tracking system allows you to unitize problems and distribute them to the other guys you are working with.
You have got both technical and political challenges to deal with. The technical ones generally aren't so bad, because you're a coder and you know how to do this. The political ones are much trickier: you're at the helm of a ship that's gone hopelessly off course, and you're in the Bermuda Triangle. The biggest challenge is often stemming the tide of negative sentiment from the client (e.g. client: "these cowboys don't know what they are doing", "they promised me this and didn't deliver", "I have no confidence in these guys any more").
For starters, apologize to the customer and tell them in concrete terms what you are going to do to right their project, e.g. you: "I'm sorry about the delay on your project, I'm getting stuck into it now. I've looked at the project history, and personally, I would be angry too if I was paying good money for this system. The first thing I'm going to tackle is..." <- bingo, you've just taken responsibility for the project, which means there's no turning back - it's all or nothing now.
A few other people have said it here, and I agree: stop adding new features. What they haven't mentioned is that you may have to do this to keep the client happy (remember, there's a technical and a political side to the challenge).
Understand the business domain as best you can. Read through any requirements documents you can get your hands on. You are at a massive disadvantage by coming onto the project late, since you don't know what was originally discussed. The devil is in the detail. This is what sunk me on a late project I wasn't able to salvage: everyone was on edge, and I missed a minor requirement. At the time, it wasn't a big deal and could have been corrected easily, but politically speaking, it was the straw that broke the camel's back. One tactic which may help is to go out on the client site for a few weeks.
Understand that time is money. It's not just a technical issue. The client has paid for something which isn't right or has not been delivered. Your company has expended resources, possibly having already used up all the project budget - the business is now losing money. And this is where the issue of new features comes in again: yes, people are saying don't add them, stabilise. But adding new features can be a politically helpful tactic; management will be happy because new money is coming in for off-spec work.
I'd recommend against you or your coding crew working ridiculous hours to deliver. If you normally leave at 5pm, leave at 6.30pm or 7pm instead. You and your coding boys can consistently maintain an hour or two of extra work for many weeks on end, and perhaps 4-5 hours over the weekend. Working until 9pm or 10pm every night will result in burnout in roughly two weeks (some can go longer). After that point, your extra time on the project is doing more harm than good. In the unlikely event your boss takes issue with this, make a choice: do what they ask (i.e. work more hours), or say "I've already committed extra hours to working on this project - I'm here for the long haul and I'm going to get this project done if it's the death of me. But that is the limit of how much time I'm willing to put in. I have other commitments to keep outside of work" <- but be ready for the consequences (remember, it's a political situation as much as a technical one).
There are people here saying "stop and write a spec, stop and do this..." - I'm sorry guys, I just can't agree with you here; it's unrealistic. The project is already stagnating, and the last thing management or the client wants to hear is "we have to stop everything and...". I've tried this before, where I've said to the client and management "the bugs will keep coming until we stop and I write up a detailed system test plan. It will take me two weeks" - the client didn't want to pay for this, and management wasn't willing to wear the cost. As it happened, the bugs kept coming.
Learn to 'juggle' - you have to map out tasks ahead of time so programmers aren't waiting on you. This will generally mean you do less coding yourself. Generally this is best achieved by having a project schedule before coding starts. Programmers should know what they are doing next after they finish what they are currently working on; they shouldn't be coming to ask you "What do I work on next?", they should already know.
Build in recovery utilities, especially if the software has recurring problems which are hard to pin down. For example, it may take 12 hours to track down a bug and fix it, but only 2 hours to put in a utility (read 'hack') to fix the problem for the time being. Time and momentum are of the essence, and unfortunately band-aid fixes may be needed.
Be very observant of the client's mood. They need to know you are 'on their side' (e.g. client: "the product is unacceptable", you: "I agree, I would kick our asses too if I was in your position. All I can tell you is I'm on it and won't rest until it's all working"). When the client is back on your side, they will actually start helping you. For instance, they may shield you from pressure from your management.
Lead your guys by example. Something along the lines of "I'm staying back a bit to work on the project, I'd appreciate the help if you're willing to stay back too" and "I know it's not our mess, but we're still going to clean it up anyway. I want the client to get some good quality software". Programmers generally couldn't care less about the company that got them into this situation, but they may care if it's about one of their own or the client ('may').
Many of the suggestions I've seen here assume a fairly high degree of power (e.g. 'stopping the project to restart it properly' or 'saying no to new features') - you are starting the project already hamstrung, and as a programmer, you will traditionally have less power to effect change than a true manager. This doesn't mean 'give up/don't try' - it just means you are going to have to be creative and do things you don't normally do (i.e. use 'soft' or people skills).
A lot of people here are saying bail on the project, run for the hills. I have been on three hopelessly late projects to date. I managed to fix two, and one I couldn't fix. Personally, it doesn't bother me to take on a late project. After all, the worst that can happen is you get fired :)
If you were involved in the project from the beginning, I hate to say it, but the company should replace you (and the entire team).
It should be reanalyzed with a competent team with real project management processes and lead by a project manager with experience in this situation.
None of the original coders should work on the 'new project' of saving it. They can move to other projects (they don't have to be fired) but to get a fresh look at the project, everybody should be replaced.
And of course, management has to understand and be on board with the fact that the project is going to be much later than expected. If management doesn't agree with this (replace team, find experienced leadership, take a step back and start again) then #Maxim is right - get out of there.
1) The first thing I will assess is whether the people on the team are committed to the project or not. If not, it is worthless to do anything else. Nothing can prevent the disaster unless I get a dedicated and committed team.
2) I'll make sure that there is QA on the team.
3) Come up with a reasonable plan of iterative and incremental releases to the customer. With the mess we are in, there is no way the customer can get everything soon. Based on the customer's priorities, we'll deliver smaller increments of functionality to him frequently. This will keep the customer engaged, and a bit less edgy, since he is seeing something happening.
Whatever you do, do it step by step.
First, it's not about adding features, it's about fixing the app. Don't add anything new. Just refactor. Say no to any new stuff somebody asks you to introduce into the system.
Don't try to improve the whole app. Take your team and make it focus on one aspect at a time, with the best practices you can, especially unit tests.
Use test-driven development only. In that case, it will immediately show you what part of the behavior you don't understand (you can't code a test if you don't know what to test). A minimal sketch of this kind of test appears after the road map below.
So here is the road map:
Identify the critical part you need to change
Isolate the code that implies this behavior
Find any occurrence of this code in the rest of the code
Refactor using this knowledge and massive TDD
Integrate, test and fix until this particular part works
Go back to step 1
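To make the "massive TDD" step concrete, here is a minimal characterization-test sketch. Everything in it is hypothetical, written for illustration only - legacy_total_price stands in for whichever poorly understood routine you are about to refactor:

    # Pin down what the legacy routine does today, then refactor with the test as a safety net.
    # All names and numbers are illustrative, not taken from any real project.
    import unittest


    def legacy_total_price(items, customer_type):
        """Stand-in for a poorly understood legacy routine targeted for refactoring."""
        total = sum(price for _, price in items)
        if len(items) > 10:          # undocumented bulk discount found by reading the code
            total *= 0.9
        if customer_type == "vip":   # another buried rule
            total *= 0.95
        return total


    class TotalPriceCharacterization(unittest.TestCase):
        def test_single_item_new_customer(self):
            # Captured from the current behavior of the running system.
            self.assertEqual(legacy_total_price([("widget", 10.0)], "new"), 10.0)

        def test_bulk_discount_kicks_in_above_ten_items(self):
            # If this fails after the refactor, behavior changed - decide whether
            # that is a bug fix or a regression before shipping.
            items = [("widget", 10.0)] * 12
            self.assertAlmostEqual(legacy_total_price(items, "new"), 108.0)


    if __name__ == "__main__":
        unittest.main()

Once a handful of tests like these pass against the current code, the "refactor using this knowledge and massive TDD" step can proceed with a safety net instead of hope.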
Make the situation clear to your boss: it will take time and money, and it will be painful. Explain why, what you will do, and that you have no other way, or it will fail AGAIN.
Above all, don't try to make it clean the first time. Refactor what you can, but don't expect to change the entire architecture of the part you are working on the first time. You will have to iterate the process over the whole application several times.
No miracle. Just method and patience.
Been there, followed these steps:
Stabilize
gather the real story: how good/bad is the codebase, how good/bad are the developers, what really needs to get done (bare bone min.), when it needs to get done by
reduce overtime (tired people, good or bad, don't work well)
remove the bad, input new/good - err on the side of replacement (many could be burnt out and appreciate even a forced change)
remove access to bad/un-required code (focus on the 20% of the code base that provides the 80% of the value)
put baseline code practices in place, ensuring only good code is getting in (don't damage the base any more)
Control
implement teams focused on the app components (decouple as much as possible)
put code management, release management, risk management, QA, etc. in place (build your environment so you can succeed)
get on your clients'/sponsors' good side - deliver a win, even if it's a somewhat stable, very very small release - and then put in change management (control what gets requested)
Move forward
develop a plan (planning is essential, plans are useless according to Ike - you need to plan to find what is missing and to set a target, but don't expect to tell the future) - continuous planning is required
aggressively manage your people - good people make good product - make sure you get and retain the best
refactor over time - clean up code as you go - you may not have the luxury to fix everything at once, so do it over time to provide for a cleaner code base
move forward bravely - over time, be more aggressive with your deliveries; test (but not stress) your team
Agile refactoring. Identify and prioritize what customer wants and then create the most important stuff in short sprints out of existing code. Good luck man :)

I need this baby in a month - send me nine women!

Under what circumstances - if any - does adding programmers to a team actually speed development of an already late project?
The exact circumstances are obviously very specific to your project ( e.g. development team, management style, process maturity, difficulty of the subject matter, etc.). In order to scope this a bit better so we can speak about it in anything but sweeping oversimplifications, I'm going to restate your question:
Under what circumstances, if any, can adding team members to a software development project that is running late result in a reduction of the actual ship date, with a level of quality equal to that if the existing team were allowed to work until completion?
There are a number of things that I think are necessary, but not sufficient, for this to occur (in no particular order):
The proposed individuals to be added to the project must have:
At least a reasonable understanding of the problem domain of the project
Be proficient in the language of the project and the specific technologies that they would use for the tasks they would be given
Their proficiency must /not/ be much less or much greater than the weakest or strongest existing member respectively. Weak members will drain your existing staff with tertiary problems while a new person who is too strong will disrupt the team with how everything they have done and are doing is wrong.
Have good communication skills
Be highly motivated (e.g. be able to work independently without prodding)
The existing team members must have:
Excellent communication skills
Excellent time management skills
The project lead/management must have:
Good prioritization and resource allocation abilities
A high level of respect from the existing team members
Excellent communication skills
The project must have:
A good, completed, and documented software design specification
Good documentation of things already implemented
A modular design to allow clear chunks of responsibility to be carved out
Sufficient automated processes for quality assurance for the required defect level. These might include such things as unit tests, regression tests, automated build deployments, etc.
A bug/feature tracking system that is currently in-place and in-use by the team (e.g. trac, SourceForge, FogBugz, etc).
One of the first things that should be discussed is whether the ship date can be slipped, whether features can be cut, and whether some combination of the two will allow you to satisfy the release with your existing staff. Many times it's a couple of features that are really hogging the resources of the team but won't deliver value equal to the investment. So give your project's priorities a serious review before anything else.
If the outcome of the above paragraph isn't sufficient, then visit the list above. If you caught the schedule slip early, the addition of the right team members at the right time may save the release. Unfortunately, the closer you get to your expected ship date, the more things can go wrong with adding people. At some point, you'll cross the "point of no return" where no amount of change (other than shipping the current development branch) can save your release.
I could go on and on but I think I hit the major points. Outside of the project, and in terms of your career, the company's future success, etc., one of the things that you should definitely do is figure out why you were late, whether anything could have been done to alert you earlier, and what measures you need to take to prevent it in the future. A late project usually occurs because you either:
Were late before you started (more stuff than time), and/or
Slipped an hour or a day at a time.
Hope that helps!
It only helps if you have a resource-driven project.
For instance, consider this:
You need to paint a large poster, say 4 by 6 meters. A poster that big, you can probably put two or three people in front of it, and have them paint in parallel. However, placing 20 people in front of it won't work. Additionally, you'll need skilled people, unless you want a crappy poster.
However, if your project is to stuff envelopes with ready-printed letters (like "You MIGHT have won!"), then the more people you add, the faster it goes. There is some overhead in doling out stacks of work, so you can't get benefits all the way up to the point where you have one person per envelope, but you can get benefits from much more than just 2 or 3 people.
So if your project can easily be divided into small chunks, and if the team members can get up to speed quickly (like... instantaneously), then adding more people will make it go faster, up to a point.
Sadly, not many projects are like that in our world, which is why docgnome's tip about the Mythical Man-Month book is really good advice.
Maybe if the following conditions apply:
The new programmers already understand the project and don't need any ramp-up time.
The new programmers already are proficient with the development environment.
No administrative time is needed to add the developers to the team.
Almost no communication is required between team members.
I'll let you know the first time I see all of these at once.
According to the Mythical Man-Month, the main reason adding people to a late project makes it later is the O(n^2) communication overhead.
I've experienced one primary exception to this: if there's only one person on a project, it's almost always doomed. Adding a second one speeds it up almost every time. That's because communication isn't overhead in that case - it's a helpful opportunity to clarify your thoughts and make fewer stupid mistakes.
Also, as you obviously knew when you posted your question, the advice from the Mythical Man-Month only applies to late projects. If your project isn't already late, it is quite possible that adding people won't make it later. Assuming you do it properly, of course.
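As a rough side note on that O(n^2) overhead: the usual back-of-the-envelope argument is that n people have n(n-1)/2 possible pairwise communication channels. Here is a tiny, purely illustrative Python sketch (the function name and the sample team sizes are my own, not anything from the book):

    # Pairwise communication channels grow roughly quadratically with team size.
    # Illustrative only - real overhead depends on team structure, not just head count.
    def channels(team_size: int) -> int:
        """Number of distinct pairs of people who may need to coordinate."""
        return team_size * (team_size - 1) // 2

    for n in (1, 2, 5, 10, 20):
        print(f"{n:>2} people -> {channels(n):>3} communication channels")
    # 1 -> 0, 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190

Going from 10 to 20 people roughly quadruples the number of channels, which is the intuition behind the book's warning.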
If the existing programmers are totally incompetent, then adding competent programmers may help.
I can imagine a situation where you had a very modular system, and the existing programmer(s) hadn't even started on a very isolated module. In that case, assigning just that portion of the project to a new programmer might help.
Basically the Mythical Man Month references are correct, except in contrived cases like the one I made up. Mr. Brooks did solid research to demonstrate that after a certain point, the networking and communication costs of adding new programmers to a project will outweigh any benefits you gain from their productivity.
If the new people focus on testing
If you can isolate independent features that don't create new dependencies
If you can orthogonalise some aspects of the project (especially non-coding tasks such as visual design/layout, database tuning/indexing, or server setup/network configuration) so that one person can work on that while the others carry on with application code
If the people know each other, and the technology, and the business requirements, and the design, well enough to be able to do things with a knowledge of when they'll step on each other's toes and how to avoid doing so (this, of course, is pretty hard to arrange if it isn't already the case)
Only when, at that late stage, you have some independent tasks (almost 0% interaction with other parts of the project) not yet tackled by anybody, and you can bring onto the team somebody who is a specialist in that domain. The addition of a team member has to minimize the disruption for the rest of the team.
Rather than adding programmers, one can think about adding administrative help. Anything that will remove distractions, improve focus, or improve motivation can be helpful. This includes systems administration as well as more prosaic things like getting lunches.
Obviously every project is different, but most development jobs can be assumed to have a certain amount of collaboration among developers. Where this is the case, my experience has been that fresh resources can unintentionally slow down the people they are relying on to bring them up to speed, and in some cases these can be your key people (incidentally, it's usually 'key' people that would take the time to educate a newb). When they are up to speed, there are no guarantees that their work will fit into the established 'rules' or 'work culture' of the rest of the team. So again, it can do more harm than good. That aside, these are the circumstances where it might be beneficial:
1) The new resource has a tight task which requires a minimum of interaction with other developers and a skill set that's already been demonstrated (e.g. porting existing code to a new platform, or externally refactoring a dead module that's currently locked down in the existing code base).
2) The project is managed in such a way that other, more senior team members' time can be shared to assist in bringing the newb up to speed and mentoring them along the way, to ensure their work is compatible with what's already been done.
3) The other team members are very patient.
I suppose the adding people toward the end of the work could speed things up if:
The work can be done in parallel.
The amount saved by added resources is more than the amount of time lost by having the people experienced with the project explain things to those that are inexperienced.
EDIT: I forgot to mention, this kind of thing doesn't happen all too often. Usually it is fairly straightforward stuff, like admin screens that do simple CRUD on a table. These days these types of tools can mostly be autogenerated anyway.
Be careful of managers that bank on this kind of work to hand off, though. It sounds great, but in reality there usually isn't enough of it to trim any significant time off the project.
Self-contained modules that have yet to be started
Lacking development tools they can integrate (like an automated build manager)
Primarily I'm thinking of things that let them stay out of the currently developing people's way. I do agree with Mythical Man-Month, but I also think there are exceptions to everything.
I think adding people to a team may speed up a project more than adding them to the project itself.
I often run into the problem of having too many concurrent projects. Any one of those projects could be completed faster if I could focus on that project alone. By adding team members, I could transition off other projects.
Of course, this assumes that you've hired capable, self-motivated developers, who are able to inherit large projects and learn independently. :-)
If the extra resource complements your existing team, it can be ideal. For example, if you are about to set up your production hardware and verify that the database is actually tuned, as opposed to just returning good results (which your team, as domain experts, would know), borrowing time from a good DBA who works on the project next to yours can speed the team up without much training cost.
Simply put, it comes down to comparing the productivity you will get from someone in the time left, minus the time it takes the additional resources to come up to speed and become productive, minus the time existing resources invest in teaching them (a rough back-of-envelope sketch of this follows the list). The key factors (in order of significance):
How good the resource is at picking it up. The best developers can walk onto a new site and be productive fixing bugs almost instantly with little assistance. This skill is rare but can be learnt.
The segregability of tasks. They need to be able to work on objects and functions without tripping over the existing developers and slowing them down.
The complexity of the project and documentation available. If it's a vanilla best-practice ASP.Net application with common, well-documented business scenarios, then a good developer can just get stuck in straight away. This factor more than any will determine how much time the existing resources will have to invest in teaching, and therefore the initial negative impact of the new resources.
The amount of time left. This is often mis-estimated too. Frequently the logic will be "we only have x weeks left and it will take x+1 weeks to get someone up to speed". In reality the project IS going to slip and does in fact have 2x weeks of dev left to go, and getting more resources on sooner rather than later will help.
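Here is the rough back-of-envelope sketch referred to above - a purely hypothetical model (every number and variable name below is an assumption made up for illustration) of whether one extra developer pays off before the deadline:

    # Hypothetical break-even model for adding one developer to a late project.
    # All figures are illustrative assumptions, not measurements.
    weeks_left = 12          # remaining schedule (often underestimated, per the last factor above)
    ramp_up_weeks = 3        # weeks before the new person is productive at all
    new_dev_velocity = 0.7   # productivity relative to an existing developer once ramped up
    mentoring_cost = 0.25    # fraction of an existing developer lost to teaching, per ramp-up week

    gained = (weeks_left - ramp_up_weeks) * new_dev_velocity
    lost = ramp_up_weeks * mentoring_cost
    net = gained - lost

    print(f"Net gain: {net:.1f} developer-weeks")
    if net > 0:
        print("Adding the person probably helps; the later you leave it, the smaller this gets.")
    else:
        print("Adding the person probably hurts; ramp-up eats the remaining schedule.")

With these made-up numbers the net gain is about 5.6 developer-weeks; set weeks_left to 4 in the same sketch and it goes slightly negative, which is why the last factor stresses getting resources on sooner rather than later.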
Where a team is already used to pair programming, then adding another developer who is already skilled at pairing may not slow the project down, particularly if development is proceeding with a TDD style.
The new developer will slowly become more productive as they understand the code base more, and any misunderstandings will be caught very early either by their pair, or by the test suite that is run before every check-in (and there should ideally be a check in at least every ten minutes).
However, the effects of the extra communication overheads need to be taken into account. It is important not to dilute the existing knowledge of the project too much.
Adding developers makes sense when the productivity contributed by the additional developers exceeds the productivity lost to training and managing those developers.
