Differences between Agile, Incremental and Waterfall development models?

What are the key differences between the agile, incremental and waterfall models?
As a beginner software developer, which model should I follow?
I need to be clear on this.

In addition to Gishu's answer:
Incremental - you build as much as you need right now. You don't over-engineer or add flexibility unless the need is proven. When the need arises, you build on top of whatever already exists. (Note: this differs from iterative development in that you're adding new things, as opposed to refining something.)
Agile - you are agile if you value the same things as listed in the agile manifesto. It also means that there is no standard template, checklist or procedure to "do agile". It doesn't overspecify; it just states that you can use whatever practices you need to "be agile". Scrum, XP and Kanban are some of the more prescriptive 'agile' methodologies; they qualify because they share the same set of values: continuous and early feedback, frequent releases/demos, evolving design, etc. Hence they can be iterative and incremental.
Waterfall involves discrete development stages: specification, design, implementation, testing, and maintenance. In principle, one stage must be complete before progress to the next stage is possible.
Selecting a process is sometimes difficult. The article Choosing the right Software development life cycle model is helpful.

Waterfall is sequential while agile is an incremental approach.
Waterfall: conception, initiation, analysis, design, construction, testing, implementation and maintenance. All eight steps are done in a sequential manner (one after another). Once a step is completed, you can't go back to the previous step. If you make even a small change, the whole project has to start again from zero. So there's no room for error or change.
When to use waterfall:
If the client has complete knowledge of what they want (size, cost & timeline of project), then go for waterfall.
Advantages:
If an employee leaves the job, a new hire can quickly pick up the project: since all the steps are sequential, new resources will easily understand the current state of the project.
Client knows what the final product will look like.
Disadvantages:
Once a step is completed, you cannot go back.
Waterfall demands heavy initial requirements; wrong requirements will lead your project somewhere else, not to its destination.
If any error is found or any change needs to be made, the project has to start from the beginning.
The whole project is only tested at the end; if bugs are written early but discovered late, their existence may have affected how other code was written.
Agile: Developers start with a simple design and then begin to work on small modules. The work on these modules is done in weekly or monthly cycles. After a module is completed, it is sent to the testing phase; if any bugs are found, the developer fixes them first, and the result is then deployed for client review. If the client asks for a change, the developer implements it before moving on. At the end of each module, project priorities are re-evaluated to decide which module to work on next.
When to use agile:
When rapid production is more important than the quality of the product.
When there's no clear picture of what the final product looks like.
Advantages:
Each module is tested after its completion, which trains developers not to repeat the same mistakes in the next module.
Agile allows developers and clients to introduce changes at any time.
After each module, the client reviews the application, so they always know the current progress of the project.
Disadvantages:
It places high demands on the project manager: defining modules, prioritizing them and setting their time periods requires a lot of experience.
Since there are no complete initial requirements, the final product can be grossly different from what was initially intended.


Having one code base for multiple regions and clients

Context:
We built a data-intensive app for the US region for just one client using ASP.NET MVC, and now we are slowly moving to ASP.NET Core. We have a requirement to develop a similar version for Canada; our approach was to maintain two different code bases even though the UI is 70% the same.
Problem:
Two code bases seemed maintainable, but we were ending up doing double work whenever a generic component had to change. Now we have multiple clients coming from multiple regions, the UI can differ a little by client and region, and we are a bit confused about how to architect such an app with just one code base.
I am not sure what would be a maintainable and scalable approach.
One approach is having a UI powered by a rules engine that is capable of showing and hiding components. How maintainable is this approach from a deployment perspective?
What other approaches would solve this problem?
The main approaches I can think of are:
Separate code bases and release pipelines. This seems to be your current approach.
Pros:
independent releases - no surprises like releasing a change to Canada which the other team made for US
potentially simpler code base - less configuration, fewer "if (region == 'CANADA')..."
independent QA - it's much simpler to automate testing if you're just testing one environment
Cons:
effort duplication as you've already noticed
One code base, changes configuration driven (a minimal sketch of this approach follows the list below).
Pros:
making a change in one place
Cons:
higher chance of many devs working on the same code at the same time
you're likely to end up with horrible 'ifs'
separating release pipelines can be very tricky. If you have a change for Canada, you need to test everything for US - this can be a significant amount of effort depending on the level of QA automation and the complexity of your test scenarios. Also, do you release US just because someone in Canada wanted to change the button color to green? If you do then you waste time. If you don't then potentially untested changes pile up for the next US release.
if you have other regions coming, this code quickly becomes complex - many people just throw stuff in to make their region work and you end up with spaghetti code.
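To make the configuration-driven option concrete, here is a minimal sketch (not from the original answer) of how per-region settings might drive the UI in ASP.NET Core; RegionOptions, the appsettings keys and InvoiceViewModel are hypothetical names chosen for illustration.

    using System;
    using Microsoft.Extensions.Options;

    // Hypothetical region settings, bound from configuration, e.g.
    // "Region": { "Name": "CA", "ShowTaxIdField": false, "DateFormat": "yyyy-MM-dd" }
    // via builder.Services.Configure<RegionOptions>(builder.Configuration.GetSection("Region"));
    public class RegionOptions
    {
        public string Name { get; set; } = "US";
        public bool ShowTaxIdField { get; set; } = true;
        public string DateFormat { get; set; } = "MM/dd/yyyy";
    }

    public record Invoice(DateTime Date, string TaxId);
    public record InvoiceViewModel(bool ShowTaxId, string FormattedDate);

    public class InvoiceViewModelFactory
    {
        private readonly RegionOptions _region;

        public InvoiceViewModelFactory(IOptions<RegionOptions> region) => _region = region.Value;

        public InvoiceViewModel Create(Invoice invoice) =>
            // Visibility and formatting come from configuration rather than
            // scattered "if (region == 'CANADA')" checks.
            new InvoiceViewModel(_region.ShowTaxIdField, invoice.Date.ToString(_region.DateFormat));
    }

The trade-offs listed above still apply: every new region adds configuration, and the QA and release-pipeline concerns don't go away.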
Separate code bases using common, configurable modules.
This could contain anything you decide is unlikely to differ across regions: NuGet packages with core logic, npm packages with JavaScript, front-end styling, etc. (a short sketch of this approach follows the pros/cons below).
Pros:
if done right you can get the best of both worlds - separate release pipelines and separate (simple) region specific code
you can make a change to the common module and decide when/if to update each region to the newest version separately
Cons:
more infrastructure effort - you need a release pipeline per app and one per each package
debugging and understanding packaged code when used in an app is tricky
changing something in common module and testing it in your app is a pain - you have to go to the common repository, make a change, test it, create a PR, merge it, wait for the package to build and get released, upgrade in your app... and then you discover the change was wrong.
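As an illustration of this third approach (again a sketch, not from the original answer), a shared package could expose a small extension point that each regional app implements, so the common code stays generic while the region differences live in the regional code bases; all names here are hypothetical.

    using System.Collections.Generic;

    // Lives in the shared package (e.g. a NuGet package with the common UI logic).
    public interface IRegionUiProfile
    {
        string Region { get; }
        bool IsComponentVisible(string componentKey);
    }

    // Lives in the Canada app's code base and is handed to the shared components at startup.
    public sealed class CanadaUiProfile : IRegionUiProfile
    {
        private static readonly HashSet<string> Hidden = new() { "StateDropdown", "SsnField" };

        public string Region => "CA";

        public bool IsComponentVisible(string componentKey) => !Hidden.Contains(componentKey);
    }

Upgrading the shared package remains a per-region decision, which is what preserves the separate release pipelines.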
I've worked with such projects and there are always problems - if you make it super configurable it becomes unreadable and overengineered. If you make it separated you have to make changes in many places and maintaining things like unit tests in many places is a nightmare.
Since you already started with approach 1 and since you mentioned other regions are coming, I'd suggest going with your current strategy and slowly abstracting common pieces to separate repos (moving towards 3rd approach).
I think the most important piece that will make such changes easier is a decent level of test automation - both for your apps and for your common modules when you create them.
One piece of advice I can give you is to be pragmatic. Some level of duplication is fine, especially if the alternative is a complex rule engine that no one understands and no one wants to touch because it's used everywhere.

What comes between the vision and a product backlog in scrum? [closed]

I'm not really looking for a book. I have seen lots of references and links to them. I can't buy one right now. I have been reading online, watching videos, etc. One thing I still don't get: what comes between the vision (the solution to the problem) and the product backlog? From what I've read, I think it is user stories, but I am not sure.
Is there anything online that shows all the steps in a linear fashion from vision/concept to the end?
Thank you for any direction.
EDIT: For requirements gathering, do you just use Excel?
User stories and a lot of negotiation over what's essential and what's fluff.
A lot of negotiation.
Also a lot of back-and-forth on architecture. Scrum requires a stable, proven architecture. However, there are always upgrades and enhancements. How do those fit with the backlog? That's a lot of political jockeying between product owner, technology folks and (to an extent) users/buyers.
The process is inherently non-linear.
It's more like crystallization. You have a solution, you start to write stories, you have a technology vision, you have a team with certain skills and experience.
Any one of these features can serve as a "nucleus" for deciding what goes into the backlog and in what order. Eventually, something becomes the nucleus and the mixture crystallizes. Sometimes the cost or schedule or risks are unacceptable, so you heat it back up, find another nucleus and see if it crystallizes acceptably around that new nucleus.
The recrystallization happens after each sprint, by the way, making it even less linear.
Edit. "stable proven architecture".
Question: Who pays for learning the new architecture?
Answer: Ha ha. There's no good answer. So be careful how much architectural learning you do while you have development sprints going on.
If you don't have an architecture in place that (a) works, and (b) can be articulated by almost everyone on the team, you're going to spend time assembling that architecture.
What does the time and cost of creating an architecture do to your first sprint?
You have to incorporate architecture development into the first sprint, delaying things.
Let's say you decide to implement a LAMP stack. You don't know whether to use PHP, Perl or Python. So you pick one. Like Python. And you promise the first sprint in four weeks. So you work for 3 weeks struggling with the kabillion add-on modules and frameworks. After 3 weeks, you think you have a working tech stack, but you don't have the promised sprint.
Do you delay? If so, everyone asks if you've got the pace right and starts doubling the time for all other sprints.
Do you deliver nothing? If so, what's the point of sprints if you have nothing at the end except infrastructure?
You can change, modify and enhance the infrastructure -- in manageable pieces. But to build a fresh architecture, prove out the pieces, train everyone and develop best practices takes time. A lot of it. And that time shouldn't -- really -- be charged as sprint time creating deliverable product. That's overhead time.
Edit. Tooling.
Rule 1. Agile processes don't use a lot of complex tools and processes. That's why I said that the process is a lot of "negotiation". Whatever makes you productive.
Rule 2. Don't over think it. Just do it.
Most folks say -- in the strongest possible way -- use 5"x8" paper cards and stick them to a wall. Seriously. No tools. Just simple paper, markers, tape and blank wall space.
Read this: http://www.agilemodeling.com/artifacts/userStory.htm
You can use a spreadsheet to collect user stories (and epics -- stories that have to be decomposed). You can add columns for complexity (story points), cost, priority and release, and use it for project management.
We use use cases (not user stories) but the tooling is the same. A use case is -- in a way -- a user story with more details up front. But the use case name can summarize how an actor interacts with a system; the interaction can usually be summarized with clear, simple nouns and a verb, which is very much like a user story.
Spreadsheets seem handy because you can rearrange the rows at the end of each sprint. You can do simple counts and sums to work out how much each feature will cost and when they'll arrive.
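If it helps to see the "simple counts and sums" spelled out, here is a small sketch (my own, not from the answer) that totals story points per release from a backlog list; BacklogItem and the velocity figure are made-up examples.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical row, mirroring the spreadsheet columns mentioned above.
    public record BacklogItem(string Story, int StoryPoints, int Priority, string Release);

    public static class BacklogSums
    {
        // Sum story points per release and estimate how many sprints each release needs.
        public static void PrintReleaseTotals(IEnumerable<BacklogItem> backlog, int velocityPerSprint)
        {
            foreach (var release in backlog.GroupBy(i => i.Release).OrderBy(g => g.Key))
            {
                int points = release.Sum(i => i.StoryPoints);
                int sprints = (int)Math.Ceiling(points / (double)velocityPerSprint);
                Console.WriteLine($"{release.Key}: {points} points, roughly {sprints} sprint(s) at velocity {velocityPerSprint}");
            }
        }
    }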
I don't use a spreadsheet because -- in spite of the GUI glitziness -- I find it a little bit cumbersome. I would feel it necessary to write a spreadsheet extractor that would turn the backlog from an Open Office Org file into ReStructuredText (RST). I prefer RST -- plain text markup -- over spreadsheets.
This is all protracted negotiation. Everything changes as a result of every conversation. That's the point of an Agile method. Quick sprint followed by negotiation over the direction of the next sprint.
Our backlog is a big RST document. We publish all our documentation using Sphinx and it's very, very simple to write backlog, use cases, architecture, design, etc., in RST markup.
Our sprints are simply sections of a big document tree. They're decorated with a few special-purpose interpreted text fields for the subjective things like estimated completion date, and status (in process, released).
What comes between the vision (solution to the problem) and the product backlog.
Nothing. From the Vision, you just create the Product Backlog (PB). Note that the Product Backlog Items (PBIs) don't all need to be fine grained; only the most emergent items do. So don't hesitate to create coarse grained items at the start; you'll decompose them into fine grained PBIs "just in time" (this activity is known as backlog grooming).
With these 2 artifacts, you can start your project. As Ken Schwaber puts it: "The minimum plan necessary to start a Scrum project consists of a vision and a Product Backlog. The vision describes why the project is being undertaken and what the desired end state is." (Schwaber 2004, p. 68)
From what I read, I think it is user stories but I am not sure.
To be honest, I'm not sure that I'm following you here. The PB is by definition a list of PBIs and creating the PB thus means feeding it with PBIs. Now, User Stories are just one possible formalism for the PBIs (Scrum doesn't force you to use User Stories, they are not appropriate for all projects) so, if you decide to use this formalism, creating the PB will mean creating User Stories.
Is there anything online that shows all the steps in a linear fashion from vision/concept to the end?
Below is one of the oldest illustrations of the Scrum framework:
On the requirements gathering, just use Excel?
This would be my recommendation. If you need a sample, maybe have a look at Henrik Kniberg's Index Card Generator. More templates and/or samples at Scrum backlog templates and examples.
The backlog comes after the requirements are defined. The backlog is in a constant state of flux, but ultimately it is the work left to be completed.
Here is a chart: link
You can start by breaking the vision down into a series of epics. These can then live in your backlog as a prioritised list of the "big rocks" of work that need to get done.
As you start planning each sprint (or a little before), you can break down the epics into user stories and prioritise them.
Google 'user story mapping'. This is a great way to understand a problem from a functional/feature view, and it's the technique I recommend to people who want to build a product but don't know where to start. The input is the vision statement and the output is a prioritized product backlog, plus the model itself.

Kanban as a Software Development Process in Practice [closed]

Has anyone used the kanban method for software development management?
I am evaluating kanban as a technique and would be curious to hear from anyone who has actually applied it in practice as to how effective it is. I've seen questions like: is-anyone-using-kanban, kanban-vs-scrum, and apply-kanban-in-an-agile-team but they don't address my concerns.
What I'm interested in specifically is:
Does it actually offer the advantages it claims in terms of dynamically identifying bottlenecks?
Is it easy to execute in practice, or does it have logistical challenges that you need to manage?
Does it scale well to project teams with many parallel streams of work and many developers?
How does it compare to critical path analysis (as implemented in MS Project), how is it different?
What other benefits can be gained from applying kanban?
Thanks.
The Kanban method is foremost a catalyst for continuous process improvement. It's not a quick fix or a fixed set of steps/practices. The method has a few foundational principles and core properties, as described in David J Anderson's recent blog post, that can lead you the way to continuous process improvements.
To your questions:
The Kanban method in itself does not identify bottlenecks. When you implement work-in-progress limits, you put stress on your process and will eventually create a pull system, and then it becomes easier to identify bottlenecks in your process. Tools like a visual kanban board and Cumulative Flow Diagrams will help you identify the bottlenecks in the process.
If you apply the foundational principles and core properties and you have the stamina/patience/dedication, it is not too hard. You need to manage the change process as with every organizational change, but the Kanban method is designed to make small and non-threatening changes.
Yes there are many documented cases of this.
The Kanban method does not prescribe a specific method for planning and projecting future deliveries. David J Anderson has a background in Theory of Constraints and uses TOC as a model in most of the writing I have read. I think the big practical difference is between MS Project-style big up-front planning and the empirically based planning used in many kanban implementations. When working with a project plan designed in MS Project at the beginning of a project, you know very little of the actual problem domain and you make assumptions. Based on these assumptions you devise a plan, and a critical path is calculated from those assumptions. With a stable kanban system, using TOC as your model, you plan "only" to have your constraint/bottleneck on the critical path. You take into account the historical variability of the work passing the constraint and you create buffers around your bottleneck with the appropriate risk you want to take. The thinking is that every hour lost at the bottleneck is an hour lost for the whole system.
The main benefit of the Kanban method is that it is a catalyst for continuous process improvement. It starts with what you've got and makes non-threatening changes that stick. Kanban is a method that is made to stick.
In the article Applying Kanban to PC Deployment the Team has to account for the following equipment:
160 new PCs to be deployed
40 new laptops to be deployed
120 PCs and 10 laptops to be refreshed and redeployed
... we are exploring the use of Kanban to manage a short-term functional project. This example focuses on using Kanban to create a transparent process to track the flow of equipment through a number of complex steps, without incurring additional costs for tracking software, complex processes and training, or duplication of effort. Improved uniformity or quality of the deployment process will also help improve efficiencies in troubleshooting and repair times as well as ensure a documentably high level of conformance to software and licensing standards.
The page above also has links to Kanban applied ...
to Tech Support
to PC Deployment (see Quote above)
to a Development Group
Challenges, Additional Concepts, and Wrapup
I also don't have a lot of experience with it, but I think I can offer some insight.
1 & 4: the main difference between Kanban boards and other techniques, like CPM, is that a Kanban board, in a correct implementation, forces you to impose work-in-progress limits. This creates a pull system, since new items are accepted by workers only when they have capacity. This differs from an MS Project-type project where tasks are assigned to workers beforehand (i.e. pushed).
It is much easier to identify a bottleneck in a pull system, because work items will queue up at some stage in the process. In a push system, work is pushed through the system (whether it is 'done done' or not), so it's difficult to find bottlenecks.
Another advantage of a pull system is you can start to base work timelines on actual results (lead and cycle time), as opposed to prediction. Yes, the size and granularity of stories does affect this, but with techniques such as cumulative flow diagrams this becomes less important.
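A tiny sketch of the pull mechanism described above (my own illustration, not from the answer): a downstream column only pulls work when it is under its WIP limit, so when it is full, items visibly queue up in the upstream column.

    using System.Collections.Generic;

    // Hypothetical WIP-limited Kanban column.
    public class KanbanColumn
    {
        private readonly Queue<string> _items = new();

        public KanbanColumn(string name, int wipLimit) { Name = name; WipLimit = wipLimit; }

        public string Name { get; }
        public int WipLimit { get; }
        public int Count => _items.Count;
        public bool HasCapacity => _items.Count < WipLimit;

        public void Add(string item) => _items.Enqueue(item);

        // Pull from upstream only when this column has spare capacity (the "pull" in a pull system).
        public bool TryPullFrom(KanbanColumn upstream)
        {
            if (!HasCapacity || upstream.Count == 0) return false;
            _items.Enqueue(upstream._items.Dequeue());
            return true;
        }
    }

If QA (say, WIP limit 2) is full while Dev keeps finishing work, the queue in front of QA grows and the bottleneck is visible on the board instead of hidden in "almost done" work.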
2: Most implementations are pretty simple, and therein lies some of the strength of the technique. I think if you're having problems with the logistics of the technique, you're doing it wrong. Have a look here for a nice 'kickstart example'.
Few definitions to focus on before jumping onto the differences:
Agile – A structured and iterative framework to track and manage projects. This approach is used in managing software development projects. It allows cross-functional teams to collaborate on users' expectations.
Kanban – A framework which uses visualization, limiting the number of tasks taken into the "Work in Progress" column. Similar types of tasks can be grouped here; to simplify it, allocate colors to tasks using swim lanes.
Scrum – The approach followed here is breaking a complex task down into simpler, smaller, manageable pieces which are easy for the respective owners on the scrum team to collaborate on.
Similarities between Kanban and Scrum
Frameworks of agile methodologies
Used to track the progress of the project
Provide the team transparency in tracking the work progress
Make use of visualization
Differences between Kanban and Scrum
Roles – Scrum depends on its defined scrum roles and the work is carried out by them respectively. Kanban does not depend on specific roles; cross-functional team members work in parallel.
Release cycle – Scrum uses sprints whose duration varies from one to two weeks. The user stories are then taken up for development, testing and bug fixes. Kanban does not follow any cycle; the process is continuous in nature.
Tracking parameters – Scrum uses velocity in planning upcoming sprints, taking into account the complexity and number of user stories completed in the previous sprint. Kanban limits the number of user stories in the "Work in Progress" column to avoid bottlenecks. It tracks the time taken to finish a task from start to end.
Scope for improvement – Scrum does not encourage changes in ongoing sprints. Kanban is open to changes before the completion of the project. It is flexible in nature.
Fit factor – Scrum is suitable for projects with clearly defined user stories, along with client sign-off on them for timely completion of the project. Kanban, being flexible in nature, allows variations in priorities based on the current scenario.
Pick process – Scrum picks the entire batch of user stories from the product backlog for development. Kanban follows the maximum number of tasks allowed in each column to maintain the sanity of the framework and to avoid bottlenecks.
Delivery – Scrum delivers based on sprint planning and priorities set from the specifications given by the client. Kanban follows a continuous delivery model based on business needs.
The above points are easier to remember if you can visualize working with them. Where scrum follows a rather predefined set of principles, Kanban is backed by the principle of flexibility. It allows you to track the tasks that are of utmost importance for delivery.
I don't have specific experience with using Kanban in software, but I am familiar with the practice from a manufacturing point of view, so I was curious about the implementation. Reading your link, the thing that struck me as a possible hitch is what felt like an underlying assumption about the same-sizedness of the units of work (features, stories, whatever). While keeping things "story sized" is a good goal, there are often mixes of bigger and smaller stories floating around, and the small number constraints in their pipeline therefore seem artificial. If the goal is to highlight bottlenecks, I would think that stand-ups, sprint planning and retrospectives will do that well enough. If the goal is to facilitate prioritization, I think that putting constraints on numbers of tasks by type wouldn't do that as well as simply ordering them top to bottom.
I guess I don't really see what value it's adding; that being said, I don't see the harm in trying it and adopting whatever pieces work.
1. Does it actually offer the advantages it claims in terms of dynamically identifying bottlenecks?
This has been my experience. By setting your WIP limits to reflect available capacity, if there is work that needs to make use of that capacity but has to wait for it to become available then you will see it as the queue of work backs up ahead of the bottleneck. I have seen this happen when there is an overworked QA team with an upstream development team who keep producing even though there is no chance of it getting looked at by QA. The solution we took to this was to lend some of the developers to the QA team who then helped alleviate the bottleneck.
2. Is it easy to execute in practice, or does it have logistical challenges that you need to manage?
This will depend on many factors which will be specific to the context to which you are applying it. One of the great strengths of Kanban is it does not require an immediate 'all or nothing' change from how you are currently working. The chapter 'A Recipe for Success' in David Anderson's 'Kanban' book gives a great way of approaching the change, starting with 'Focus on Quality'
3. Does it scale well to project teams with many parallel streams of work and many developers?
In the project where I first used it we ended up with a team of seventeen developers and we had moved the QA team of four into our team too. We also had lots of external dependencies which we used Kanban to model.
4. How does it compare to critical path analysis (as implemented in MS Project), how is it different?
Pass, as I have never used it.
5. What other benefits can be gained from applying kanban?
There are many, but the one I will call out is that it gives you metrics that are genuinely useful for discussing and steering the work, both with the team and with stakeholders and other people outside the team. Specifically, the use of 'Throughput' and 'Cycle Time' helps give you a probabilistic forecast of when work will get done.
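For what 'Throughput' and 'Cycle Time' mean in practice, here is a minimal sketch (mine, not from the answer) that computes both from a list of completed items; CompletedItem is a hypothetical type.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical completed work item; only the start and finish timestamps matter here.
    public record CompletedItem(string Id, DateTime Started, DateTime Finished);

    public static class FlowMetrics
    {
        // Cycle time: how long items take from being started to being finished.
        public static TimeSpan AverageCycleTime(IReadOnlyCollection<CompletedItem> done) =>
            TimeSpan.FromTicks((long)done.Average(i => (i.Finished - i.Started).Ticks));

        // Throughput: how many items get finished per week over the observed period.
        public static double ThroughputPerWeek(IReadOnlyCollection<CompletedItem> done)
        {
            double weeks = Math.Max((done.Max(i => i.Finished) - done.Min(i => i.Finished)).TotalDays / 7.0, 1.0);
            return done.Count / weeks;
        }
    }

With a few weeks of this data you can answer "when will this be done?" from observed flow rather than from up-front estimates.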

What steps make up your web development process and how much time does each phase take?

Let's say you work 100 days on a project. How many days would each phase of your process (requirements analysis, specification, etc.) take?
I'm interested also in the ratio of specific activities in every phase, such as writing tests, back-end coding, front-end coding, visual design, database design etc.
Many thanks!
EDIT:
Just to make things clear, I'm not talking about web site design - I'm interested in more "serious" web development, such as custom business web applications. I know, everything depends on the specifics of each project, however I suppose the ratios could be roughly the same from project to project.
EDIT2:
As Helen correctly remarked, this question is really hard to answer, since projects can be so different and so can be teams. To make it more specific, let's say you have a team of four developers - two of them for back-end work, one for front-end programming and one for design & html/css coding (one member of the team acts as a project manager) and you are supposed to develop StackOverflow.com site.
We're running agile scrum projects, so we typically run all these activities in parallel. So while I cannot answer your exact question, I can give you some ideas of the ratios we have found to be effective:
4-5 developers can be served by one client-side programmer (html/css), one on-team tester and one interaction designer (who works with the customer to design wireframes). A team like this typically needs a 50% graphic designer for most applications, but your mileage may vary there. Then there's a project manager, and there are all sorts of other stakeholders that are not part of the core development team.
In the development team you normally have a couple of developers who are sharp on client-side development and a similar number on the back-end. These staffings also tend to reflect resource usage ;) Testing is an integral part of development, alongside the efforts of the on-team tester.
Your local conditions may of course vary, but these numbers are just to give you some idea.
Step 1: denial
Step 2: anger
Step 3: acceptance
The time each step takes is different for all team members involved.
I agree with everyone who started along the lines of, "It depends on the project".
On the other hand, I do think that there's a consistent process that can be followed; only tweaking the percentages of effort to match the project:
Typically, I follow these basic principles:
Discovery - determine the feature/functionality of the system. The easiest (and worst) thing to do is accept what's being asked for and go with it.
For example, "building stackoverflow.com" is a pretty broad request - and is actually the wrong request. The project has to start with "I need an online location where programmers can collaborate".
Based on that one thing you're trying to solve, you can drill down into all the details you want - like how a question will be answered, asked, rated, etc.
I think this is the most crucial step! output = requirements/specification; 20/100 days can safely be spent here
Wireframing - this is where I like to use basic HTML pages, paint.NET, or even construction paper and glue to mock up every aspect of the final site's functionality. I like using paper because it's easy to make changes :)
Going through this process forces you to consider just about every aspect of the user experience and gives you the flexibility to add/remove features and adjust your requirements as needed. Your customer has some input to changes before you've committed a bunch of time to writing code.
An added bonus is that you get to use paste :)
10/100 days
Implementation/Testing - I group implementation AND testing together because I think it's short sighted to develop a whole site without testing along the way. (At the same time, you still need step 4). This is the part where the rubber hits the road. If you've handled your client properly in steps 1 and 2, you'll be pleasantly writing your code without any last-minute changes in scope (or at least very few). I try to follow a general set of steps for implementation:
data development (db design, query design, sample data setup)
site framework (set up your environment(s); production, dev, and qa)
front-end structure (css, standard classes, standard html structures)
start coding!
55/100 days
SQA - hopefully you can get some non-involved parties/end users to test the app out as you go. Test plans need to be developed to make it clear what should be tested and what the desired outcomes are. I like using real people for testing the front end; automated tools are fine for code/backend modules.
This is a good time to let the client see things progressing - they should have very limited ability to make changes at this point.
10/100 days
Delivery/Post Production honeymoon - you've built it, tested it, and you're ready to deploy. Get the code out there and let the client play. You shouldn't have much to tweak; but I'm sure there will be some adjustments.
5/100 days
Some of this seems idealistic; but you'd be surprised how quickly you can ship your application when you've got a well-reviewed, well-created specification.
It is impossible to give a meaningful answer to this question. The ratios will not be even roughly the same from project to project. For some projects the visual design barely matters (as long as it more or less works) but the database is critical and complex. For others, it's all about providing a smooth user experience with lots of AJAX goodies and other eye candy, but the underlying data is trivially simple to organise and store.
It sounds like you're thinking mainly of one-man projects, but for larger teams the size and setup of the team also matters, as well as your development process.
Probably we are an unusual development shop. Our whole existence (at least during work hours) is requirements gathering. Developers are required to also work in every other department. Be it answering the phone in after-sales support (and fighting the CRM software), driving a forklift in the warehouse (and fighting the mobile terminals) or packing crates in the shipping station (and fighting confusing delivery notes).
When we tackled a new project, "requirements gathering" was usually an afternoon on the whiteboard, usually with somebody from the department which would use the new software most. There was little upfront design and lots of refactoring and rewriting. We were very happy with this and generated about 100,000 lines of code which are well-architected and stable.
But it seems we are hitting a complexity barrier now. This is very frustrating, because moving to "heavier" processes than hack-and-slay coding results in a dramatic loss of productivity.
Just to be clear - you're basically time-boxing your work - which is directly related to having a fixed budget (4 developers x $x per day x 100 days - assuming it's a 100-day duration and not 100 days of work effort). If that's the case then, on average, you would spend:
25% up-front planning, which includes scope, spec development, technology approach, logistics (computers, servers, work space), and resource gathering.
50% development - test case (TDD) development, schema design and implementation, front-end coding, back-end coding, deployment
15% testing - basic break/fix activities
10% overhead/management - project management, communication and coordination.
Very rough estimates - there are many areas to consider, including resource skills/maturity, technology being used, location of resources (one room or across the country), level of requirements, etc. Using skill-specific resources would make planning more difficult since you might need the resources to perform multiple roles - one suggestion would be to get 3 generalists who can help spec/design/plan and one tech wizard who would ensure the platform and database are set up correctly (key to success once you have requirements that are as good as possible).
That is truly a tricky question. To give a somewhat exact estimate of the ratio of time to apply to each step - if we take a classical approach of design, implement, test and deploy - one needs to know the specification and the expertise of the project members.
If you take McConnell's book "Software Estimation" (which I highly recommend) there is a chapter in there about historical data and how to use it for future projects.
I do not think that you have exact historical data from former projects - well, I don't have any - although I always remind myself to record them ;)
Since the smallest failures or uncertainties in the design phase are the most crucial ones, take a lot of time to specify what you want to do. Make sure that everyone understands it the same way and write it down.
To cut a long story short - I'd put 50% - 75% of the time in the design (if 75% this would include a prototype to clear all uncertainties) and equal parts in implementation and test.
If you are using TDD you would mix design and test a bit so you would take a bit of the design-phase and add it to the test-phase.
Building a list of client needs 1-2 days
This depends on the client and what they need and how well prepared they are.
Designers do initial sketch ups 2-3 days
A bit of branching happens here as 2 and 3 will happen concurrently.
Programmers build any functionality not already in our existing system 1 day - 1 month
This depends on the client and what they need, more than most anything else.
This also will only produce functional code.
Repeat steps 2 & 3 until the client is happy with the general feeling of what we have.
Could be 1 iteration, could be 100 (not likely; if by 10 we couldn't make them happy, we'd send them somewhere else).
Build final design 1-5 days
This is the final version: no errors, valid CSS/HTML/JS, everything is cross-browser, etc.
Build final functionality 2-3 days
This code is "perfect": it works 100%, it is pretty, there are no known bugs, and the developers are happy to send it.
This and Step 5 happen concurrently.
Deploy 10 seconds.
Then 2 weeks, 2 months and 6 months later we do a review to make sure there have been no problems.
So if you skip the review this usually takes 8-20 days; I don't know how you'll work that into 100 days.
If we are just building an application (or extending one) for a client, we would spend 2-3 days defining EXACTLY what they need, then however long it takes to build it.

I need this baby in a month - send me nine women!

Under what circumstances - if any - does adding programmers to a team actually speed development of an already late project?
The exact circumstances are obviously very specific to your project ( e.g. development team, management style, process maturity, difficulty of the subject matter, etc.). In order to scope this a bit better so we can speak about it in anything but sweeping oversimplifications, I'm going to restate your question:
Under what circumstances, if any, can adding team members to a software development project that is running late result in a reduction in the actual ship date with a level of quality equal to that if the existing team were allowed to work until completion?
There are a number of things that I think are necessary, but not sufficient, for this to occur (in no particular order):
The proposed individuals to be added to the project must have:
At least a reasonable understanding of the problem domain of the project
Be proficient in the language of the project and the specific technologies that they would use for the tasks they would be given
Their proficiency must /not/ be much less or much greater than that of the weakest or strongest existing member respectively. Weak members will drain your existing staff with tertiary problems, while a new person who is too strong will disrupt the team by insisting that everything they have done and are doing is wrong.
Have good communication skills
Be highly motivated (e.g. be able to work independently without prodding)
The existing team members must have:
Excellent communication skills
Excellent time management skills
The project lead/management must have:
Good prioritization and resource allocation abilities
A high level of respect from the existing team members
Excellent communication skills
The project must have:
A good, completed, and documented software design specification
Good documentation of things already implemented
A modular design to allow clear chunks of responsibility to be carved out
Sufficient automated quality assurance processes for the required defect level (these might include such things as unit tests, regression tests, automated build deployments, etc.)
A bug/feature tracking system that is currently in-place and in-use by the team (e.g. trac, SourceForge, FogBugz, etc).
One of the first things that should be discussed is whether the ship date can be slipped, whether features can be cut, and whether some combination of the two will allow you to satisfy the release with your existing staff. Many times it's a couple of features that are really hogging the resources of the team and won't deliver value equal to the investment. So give your project's priorities a serious review before anything else.
If the outcome of the above paragraph isn't sufficient, then visit the list above. If you caught the schedule slip early, the addition of the right team members at the right time may save the release. Unfortunately, the closer you get to your expected ship date, the more things can go wrong with adding people. At one point, you'll cross the "point of no return" where no amount of change (other than shipping the current development branch) can save your release.
I could go on and on, but I think I hit the major points. Outside of the project and in terms of your career, the company's future success, etc., one of the things that you should definitely do is figure out why you were late, whether anything could have been done to alert you earlier, and what measures you need to take to prevent it in the future. A late project usually occurs because you either:
were late before you started (more stuff than time), and/or
slipped 1 hour, 1 day at a time.
Hope that helps!
It only helps if you have a resource-driven project.
For instance, consider this:
You need to paint a large poster, say 4 by 6 meters. A poster that big, you can probably put two or three people in front of it, and have them paint in parallel. However, placing 20 people in front of it won't work. Additionally, you'll need skilled people, unless you want a crappy poster.
However, if your project is to stuff envelopes with ready-printed letters (like "You MIGHT have won!"), then the more people you add, the faster it goes. There is some overhead in doling out stacks of work, so you can't get benefits all the way to the point where you have one person per envelope, but you can get benefits from much more than just 2 or 3 people.
So if your project can easily be divided into small chunks, and if the team members can get up to speed quickly (like... instantaneously), then adding more people will make it go faster, up to a point.
Sadly, not many projects are like that in our world, which is why docgnome's tip about the Mythical Man-Month book is really good advice.
Maybe if the following conditions apply:
The new programmers already understand the project and don't need any ramp-up time.
The new programmers already are proficient with the development environment.
No administrative time is needed to add the developers to the team.
Almost no communication is required between team members.
I'll let you know the first time I see all of these at once.
According to the Mythical Man-Month, the main reason adding people to a late project makes it later is the O(n^2) communication overhead.
I've experienced one primary exception to this: if there's only one person on a project, it's almost always doomed. Adding a second one speeds it up almost every time. That's because communication isn't overhead in that case - it's a helpful opportunity to clarify your thoughts and make fewer stupid mistakes.
Also, as you obviously knew when you posted your question, the advice from the Mythical Man-Month only applies to late projects. If your project isn't already late, it is quite possible that adding people won't make it later. Assuming you do it properly, of course.
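Brooks's O(n^2) overhead comes from the number of pairwise communication channels, n(n-1)/2; a quick sketch (my own, for illustration) to make the growth concrete:

    using System;

    public static class CommunicationChannels
    {
        // Pairwise communication channels in a team of n people: n(n-1)/2, i.e. O(n^2).
        public static int Channels(int teamSize) => teamSize * (teamSize - 1) / 2;

        public static void Main()
        {
            foreach (int n in new[] { 2, 5, 10, 20 })
                Console.WriteLine($"{n} people -> {Channels(n)} channels");
            // Prints: 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190
        }
    }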
If the existing programmers are totally incompetent, then adding competent programmers may help.
I can imagine a situation where you had a very modular system, and the existing programmer(s) hadn't even started on a very isolated module. In that case, assigning just that portion of the project to a new programmer might help.
Basically the Mythical Man Month references are correct, except in contrived cases like the one I made up. Mr. Brooks did solid research to demonstrate that after a certain point, the networking and communication costs of adding new programmers to a project will outweigh any benefits you gain from their productivity.
If the new people focus on testing
If you can isolate independent features that don't create new dependencies
If you can orthogonalise some aspects of the project (especially non-coding tasks such as visual design/layout, database tuning/indexing, or server setup/network configuration) so that one person can work on that while the others carry on with application code
If the people know each other, and the technology, and the business requirements, and the design, well enough to be able to do things with a knowledge of when they'll step on each other's toes and how to avoid doing so (this, of course, is pretty hard to arrange if it isn't already the case)
Only when, at that late stage, you have some independent tasks (almost 0% interaction with other parts of the project) not yet tackled by anybody, and you can bring onto the team somebody who is a specialist in that domain. The addition of a team member has to minimize the disruption for the rest of the team.
Rather than adding programmers, one can think about adding administrative help. Anything that will remove distractions, improve focus, or improve motivation can be helpful. This includes system administration as well as more prosaic things like getting lunches.
Obviously every project is different, but most development jobs can be assumed to have a certain amount of collaboration among developers. Where this is the case, my experience has been that fresh resources can unintentionally slow down the people they rely on to bring them up to speed, and in some cases these can be your key people (incidentally, it's usually 'key' people who would take the time to educate a newb). When they are up to speed, there are no guarantees that their work will fit into the established 'rules' or 'work culture' of the rest of the team. So again, it can do more harm than good. That aside, these are the circumstances where it might be beneficial:
1) The new resource has a tight task which requires a minimum of interaction with other developers and a skill set that's already been demonstrated. (ie. porting existing code to a new platform, externally refactoring a dead module that's currently locked down in the existing code base).
2) The project is managed in such a way that other more senior team members time can be shared to assist bringing the newb up to speed and mentoring them along the way to ensure their work is compatible with what's already been done.
3) The other team members are very patient.
I suppose that adding people toward the end of the work could speed things up if:
The work can be done in parallel.
The amount saved by added resources is more than the amount of time lost by having the people experienced with the project explain things to those that are inexperienced.
EDIT: I forgot to mention, this kind of thing doesn't happen all too often. Usually it is fairly straightforward stuff, like admin screens that do simple CRUD on a table. These days these types of tools can mostly be autogenerated anyway.
Be careful of managers that bank on this kind of work to hand off, though. It sounds great, but in reality there usually isn't enough of it to trim any significant time off the project.
Self-contained modules that have yet to be started
Missing development tools that they can integrate (like an automated build manager)
Primarily I'm thinking of things that let them stay out of the currently developing people's way. I do agree with Mythical Man-Month, but I also think there are exceptions to everything.
I think adding people to a team may speed up a project more than adding them to the project itself.
I often run into the problem of having too many concurrent projects. Any one of those projects could be completed faster if I could focus on that project alone. By adding team members, I could transition off other projects.
Of course, this assumes that you've hired capable, self-motivated developers, who are able to inherit large projects and learn independently. :-)
If the extra resources complement your existing team, it can be ideal. For example, if you are about to set up your production hardware and verify that the database is actually tuned, as opposed to just returning good results (which your team, as domain experts, can recognize), borrowing time from a good DBA who works on the project next to yours can speed the team up without much training cost.
Simply put, it comes down to comparing the time left and the productivity you will get from someone, excluding the amount of time it takes the additional resources to come up to speed and be productive, and subtracting the time existing resources invest in teaching them. The key factors (in order of significance):
How good the resource is at picking it up. The best developers can walk onto a new site and be productive fixing bugs almost instantly with little assistance. This skill is rare but can be learnt.
The segregability of tasks. They need to be able to work on objects and functions without tripping over the existing developers and slowing them down.
The complexity of the project and documentation available. If it's a vanilla best practice ASP.Net application and common well-documented business scenarios then a good developer can just get stuck in straight away. This factor more than any will determine how much time the existing resources will have to invest in teaching and therefore the initial negative impact of the new resources.
The amount of time left. This is often mis-estimated too. Frequently the logic will be we only have x weeks left and it will take x+1 weeks to get someone up to speed. In reality the project IS going to slip and does in fact have 2x weeks of dev left to go and getting more resources on sooner rather than later will help.
Where a team is already used to pair programming, then adding another developer who is already skilled at pairing may not slow the project down, particularly if development is proceeding with a TDD style.
The new developer will slowly become more productive as they understand the code base more, and any misunderstandings will be caught very early either by their pair, or by the test suite that is run before every check-in (and there should ideally be a check in at least every ten minutes).
However, the effects of the extra communication overheads need to be taken into account. It is important not to dilute the existing knowledge of the project too much.
Adding developers makes sense when the productivity contributed by the additional developers exceeds the productivity lost to training and managing those developers.
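As a back-of-the-envelope sketch of that rule (my own illustration; all numbers are assumptions): adding a developer is worth it only if the work they contribute after ramp-up exceeds the mentoring and coordination time the existing team loses.

    using System;

    public static class AddDeveloperBreakEven
    {
        // Net effect, in developer-days, of adding one developer to a late project.
        public static double NetGainInDevDays(
            double remainingWeeks,
            double rampUpWeeks,          // weeks before the new developer is productive
            double newDevDaysPerWeek,    // effective output once productive
            double mentoringDaysPerWeek, // existing-team days spent teaching during ramp-up
            double overheadDaysPerWeek)  // ongoing coordination cost across the team
        {
            double gained = Math.Max(remainingWeeks - rampUpWeeks, 0) * newDevDaysPerWeek;
            double lost = rampUpWeeks * mentoringDaysPerWeek + remainingWeeks * overheadDaysPerWeek;
            return gained - lost;
        }

        public static void Main()
        {
            // With 12 weeks left and a 3-week ramp-up the addition nets out positive;
            // with only 4 weeks left the same developer makes the project later.
            Console.WriteLine(NetGainInDevDays(12, 3, 4, 2, 0.5)); // 24
            Console.WriteLine(NetGainInDevDays(4, 3, 4, 2, 0.5));  // -4
        }
    }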

Resources