Having one code base for multiple regions and clients - user-interface

Context:
We built a data-intensive app for the US region for a single client using ASP.NET MVC, and we are now slowly moving to ASP.NET Core. We have a requirement to develop a similar version for Canada. Our approach was to maintain two different code bases, even though the UI is about 70% the same.
Problem:
Two code bases seemed maintainable, but we end up doing double work whenever a generic component has to change. Now we have multiple clients coming from multiple regions, the UI can differ slightly by client and region, and we are a bit confused about how to architect such an app with just one code base.
I am not sure on what would be a maintainable and scalable approach.
One approach is a UI powered by a rules engine that can show and hide components. How maintainable is this approach from a deployment perspective?
What other approaches could solve this problem?

The main approaches I can think of are:
1. Separate code bases and release pipelines. This seems to be your current approach.
Pros:
independent releases - no surprises like releasing a change to Canada which the other team made for US
potentially simpler code base - less configuration, fewer "if (region == 'CANADA')..."
independent QA - it's much simpler to automate testing if you're just testing one environment
Cons:
effort duplication as you've already noticed
2. One code base, with changes driven by configuration.
Pros:
making a change in one place
Cons:
higher chance of many devs working on the same code at the same time
you're likely to end up with horrible 'ifs'
separating release pipelines can be very tricky. If you have a change for Canada, you need to test everything for US - this can be a significant amount of effort depending on the level of QA automation and the complexity of your test scenarios. Also, do you release US just because someone in Canada wanted to change the button color to green? If you do then you waste time. If you don't then potentially untested changes pile up for the next US release.
if you have other regions coming, this code quickly becomes complex - many people just throw stuff in to make their region work and you end up with spaghetti code.
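To make the configuration-driven idea concrete, here is a minimal sketch (in TypeScript for brevity; component names, regions, and labels are all hypothetical): region and client differences live in one declarative config with per-region overrides, so the "ifs" are confined to a single merge function instead of being scattered through the UI.

```typescript
// Hypothetical sketch: region differences expressed as data, not as
// scattered conditionals. All names here are illustrative.
type Region = "US" | "CA";

interface UiConfig {
  visibleComponents: string[];
  labels: Record<string, string>;
}

// Defaults shared by every region.
const baseConfig: UiConfig = {
  visibleComponents: ["header", "dataGrid", "exportButton"],
  labels: { postalCode: "ZIP code" },
};

// Per-region overrides; a per-client layer could be merged the same way.
const regionOverrides: Record<Region, Partial<UiConfig>> = {
  US: {},
  CA: { labels: { postalCode: "Postal code" } },
};

// Merge defaults with overrides; the UI only ever reads the merged result.
function configFor(region: Region): UiConfig {
  const o = regionOverrides[region];
  return {
    visibleComponents: o.visibleComponents ?? baseConfig.visibleComponents,
    labels: { ...baseConfig.labels, ...o.labels },
  };
}
```

This tames the "horrible ifs", but note it does not remove the other cons: every region still shares one release pipeline, and a change to the base config still needs testing everywhere.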
3. Separate code bases using common, configurable modules.
This could contain anything you decide is unlikely to differ across regions: NuGet packages with core logic, npm packages with JavaScript, front-end styling, etc.
Pros:
if done right you can get the best of both worlds - separate release pipelines and separate (simple) region specific code
you can make a change to the common module and decide when/if to update each region to the newest version separately
Cons:
more infrastructure effort - you need a release pipeline per app and one for each package
debugging and understanding packaged code when used in an app is tricky
changing something in common module and testing it in your app is a pain - you have to go to the common repository, make a change, test it, create a PR, merge it, wait for the package to build and get released, upgrade in your app... and then you discover the change was wrong.
I've worked with such projects and there are always problems - if you make it super configurable it becomes unreadable and overengineered. If you make it separated you have to make changes in many places and maintaining things like unit tests in many places is a nightmare.
Since you already started with approach 1, and since you mentioned other regions are coming, I'd suggest going with your current strategy and slowly abstracting common pieces into separate repos (moving towards the 3rd approach).
I think the most important piece that will make such changes easier is a decent level of test automation - both for your apps and for your common modules when you create them.
One piece of advice I can give you is to be pragmatic. Some level of duplication is fine, especially if the alternative is a complex rule engine that no one understands and no one wants to touch because it's used everywhere.

Differences between Agile, Incremental and Waterfall development models?

What are the key differences between the agile, incremental, and waterfall models?
As a beginner software developer, which model should I follow?
I need to be clear on this.
In addition to Gishu's answer:
Incremental - you build as much as you need right now. You don't over-engineer or add flexibility unless the need is proven. When the need arises, you build on top of whatever already exists. (Note: differs from iterative in that you're adding new things.. vs refining something).
Agile - you are agile if you value the same things as listed in the agile manifesto. It also means that there is no standard template or checklist or procedure to "do agile". It doesn't overspecify.. it just states that you can use whatever practices you need to "be agile". Scrum, XP, Kanban are some of the more prescriptive 'agile' methodologies because they share the same set of values. Continuous and early feedback, frequent releases/demos, evolve design, etc.. hence they can be iterative and incremental.
Waterfall involves discrete development stages: specification, design, implementation, testing, and maintenance. In principle, one stage must be complete before progress to the next stage is possible.
Selecting a process is sometimes difficult. Read the article Choosing the right Software development life cycle model; it is helpful.
Waterfall is sequential while agile is an incremental approach.
Waterfall: Conception, initiation, analysis, design, construction, testing, implementation and maintenance. All eight steps are done in a sequential manner (one after another). Once a step is completed, you can't go back to the previous step. If you make a little change, the whole project starts over from zero. So there's no room for error or change.
When to use waterfall:
If the client has complete knowledge of what they want (size, cost & timeline of project), then go for waterfall.
Advantages:
If any employee leaves the job, a new employee can quickly get a grip on the project: since all the steps are sequential, new resources will easily understand the current state of the project.
Client knows what the final product will look like.
Disadvantages:
If one step is completed, you cannot go back.
Waterfall demands heavy initial requirements; false requirements will lead your project somewhere else, not to its destination.
If any error is found or any change needs to be made, the project has to start from the beginning.
The whole project is only tested at the end, so if bugs are introduced early but discovered late, their existence may have affected how other code was written.
Agile: Developers start with a simple design and then begin work on small modules, on a weekly or monthly basis. After completion, a module is sent to the testing phase; if any bug turns up, the developer first removes that bug, and then the result is deployed in order to get the client's review. If the client demands any change, the developer has to implement that change first. At the end of each module, project priorities are re-evaluated to decide which module to work on next.
When to use agile:
When rapid production is more important than the quality of the product.
When there's no clear picture of what the final product looks like.
Advantages:
Each module is tested after its completion, which trains developers not to make the same mistakes in the next module.
Agile allows developers and clients to make changes at any time.
After each module, the client reviews the application, so the client knows about the progress of the project after each module.
Disadvantages:
It demands a skilled project manager, as defining modules, prioritizing them, and setting the time period of each module requires a lot of experience.
As there are no complete initial requirements, the final product can turn out grossly different from what was initially intended.

How to go about a large refactoring project? [closed]

I am about to start planning a major refactoring of our codebase, and I would like to get some opinions and answers to some questions. (I have seen quite a few discussions on similar topics, such as https://stackoverflow.com/questions/108141/how-do-i-work-effectively-with-very-messy-legacy-code and Strategy for large scale refactoring, but I have some specific questions, at the bottom.)
We develop a complex application. There are some 25 developers working on the codebase. Total man-years put into the product to date are roughly 150.
The current codebase is a single project, built with ant. The high level goal of the project I'm embarking on is to modularize the codebase into its various infrastructures and applicative components.
There is currently no good separation between the various logical components, so it's clear that any modularization effort will need to include some API definitions and serious untangling to enable the separation.
Quality standards are low - there are almost no tests, and definitely no tests running as part of the build process.
Another very important point is that this project needs to take place in parallel to active product development and versions being shipped to customers.
Goals of project:
allow reuse of components across different projects
separate application from infrastructure, and allow them to evolve independently
improve testability (by creating APIs)
simplify developers' dev env (less code checked out and compiled)
My thoughts and questions:
What are your thoughts regarding the project's goals? Anything you would change?
Do you have experience with such projects? What would be some recommendations?
I'm very concerned with the lack of tests - hence the lack of control for me to know that the refactoring process is not breaking anything as I go. This is a catch-22, because one of the goals of this project is to make our code more testable...
I was very influenced by Michael Feathers' Working Effectively With Legacy Code. According to it, a bottom-up approach is the way to solve my problem - don't jump head first into the codebase and try to fix it, but rather start small by adding unit tests around new code for several months, and see how the code (and team) become much better, to an extent where abstractions will emerge, APIs will surface, etc., and essentially - the modularization will start happening by itself.
Does anyone have experience with such a direction?
As seen in many other questions on this topic - the main problem here is managerial disbelief. "how is testing class by class (and spending a lot of time doing so) gonna bring us to a stable system? It's a nice theory which doesn't work in real life". Any tips on selling this?
Well, I guess it's better now than later, but you've definitely got a task ahead of you. I was once in a team of three responsible for refactoring a product of similar size. It was procedural code, but I'll describe some of the issues we had that will similarly apply.
We started at the bottom and started easing into it by picking functions that should have been highly reusable but weren't. We'd write a bunch of unit tests on the existing code (none existed at all!), but before long, we faced our first big problem--the existing code had bugs that had been laying dormant.
Do we fix them? If we do, then we've gone beyond a refactoring. So we'd log an issue with the existing code hoping to get a fixed and freshly tested code base, but of course management decided there were more important priorities than fixing bugs that had never surfaced. Understandable.
So we thought we'd try fixing the bugs in our new code. Then we discovered that these bugs in the original code made other code work, so really were 'conceptual bugs' rather than 'functional bugs'. Well maybe. There were occasional intermittent spasms in the original software that had never been tracked down.
So then we changed tack and decided to keep the bugs in place, as a true refactoring should do. It's easy to unintentionally introduce bugs, it's far harder to do it intentionally!
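That "keep the bugs in place" discipline is exactly what characterization ("golden master") tests capture: record what the legacy code does today, bugs included, and assert against that recorded behavior while refactoring. A hypothetical sketch in TypeScript - the function and values are illustrative, not from the project described above:

```typescript
// Stand-in for tangled legacy logic we dare not change yet.
// (Hypothetical function; the point is the testing pattern, not the code.)
function legacyPriceWithTax(price: number, region: string): number {
  const rate = region === "CA" ? 0.13 : 0.08;
  return Math.round(price * (1 + rate) * 100) / 100;
}

// Characterization test: the expected values below were produced by
// running the legacy code itself, not derived from a spec. They freeze
// today's behavior, including any dormant bugs it may contain.
const golden: Array<[number, string, number]> = [
  [100, "US", 108],
  [99.99, "CA", 112.99],
];

function checkCharacterization(): boolean {
  return golden.every(
    ([price, region, expected]) =>
      legacyPriceWithTax(price, region) === expected
  );
}
```

If a recorded value later turns out to be a genuine bug, the test makes that an explicit, separately scheduled decision rather than an accidental behavior change during the refactor.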
The next problem was that the code was in such a mess that the initial unit tests we wrote had to change substantially to cater for the refactoring. In other words, two moving targets. Not good. Just writing the tests was taking ages and made us lose belief in the worthiness of the project. It really was something you just wanted to walk away from.
We found in the end we really had to tone down the extent of the refactoring if we were going to finish this millennium, which meant the codebase we dreamed of wouldn't be achieved. We declared that the most feasible solution was just to clean and trim the code and at least make it conceptually easier to understand for future developers to modify.
The reduced benefits of the limited refactoring were deemed not worth the effort by management, and given that similar reliability issues were being found in the hardware platform (an embedded project), the company decided it was their chance to renew the entire product, with the software written from scratch in a new language, with objects. It was only the extensive system test specs in place from the original product that meant this had a chance.
Clearly the absence of tests is going to make people nervous when you attempt to refactor the code. Where will anybody get any faith that your refactoring doesn't break the application? Most of the answers you'll get, I think, will be "this is gonna be very hard and not very successful", and this is largely because you are facing a huge manual task and no faith in the answer.
There are only two ways out.
Build a bunch of tests. Unfortunately, this will cost a lot of time and most managers don't see any value; after all, you've gotten along without them so far. Pointing back to the faith question won't help; you're still using a lot of time before anything useful happens. If they do let you build tests, you'll have the problem of evolving the tests as you refactor; they may not change functionality one bit, but as you build new APIs the tests will have to change to match the new APIs. That's additional work beyond refactoring the code base.
Automate the refactoring process. If you apply trustworthy automated transformations, you can argue (often unsuccessfully) that the refactored code preserves the original system function. The way to beat the unsuccessful argument is to write those tests (see the first method) and apply the refactoring process to the application and the tests; as the application's structure changes, the tests have to change too. But they are just application code from the point of view of the automated machinery.
Not a lot of people do the latter; where do you get the tools that can do such things?
In fact, such tools exist. They are called program transformation tools and are used to carry out massive transformations on code.
Think of these as tools for literally refactoring in the large; because of scale,
they tend not to be interactive.
It does take effort to configure them for the task at hand; you have to write custom rules to accomplish your custom desired result. You likely can't do this in a week, but this is a lot less work than manually modifying a large system. And you should consider that you have 150 man-years invested in the existing software; it took that long to make the mess. It seems reasonable that "some" effort small in comparison should be OK.
I only know of 3 such tools that have a chance of working on real code: TXL, Stratego/XT, and our tool, the DMS Software Reengineering Toolkit. The first two are academic products (although TXL has been used for commercial activities in the past); DMS is commercial.
DMS has been used for a wide variety of large-scale software analysis and massive transformation tasks. One task was automated translation between languages for the B-2 Stealth Bomber. Another, much closer to your refactoring problem, was the automated re-architecting of a large-scale component-based C++ system: from a legacy proprietary RTOS, with its idiosyncratic rules about how components are organized, to CORBA/RT, in which the component APIs had to be changed from ad hoc structures to CORBA-style facet and receptacle interfaces, as well as using CORBA/RT services in place of the legacy RTOS services. (These tasks were both done with 1-2 man-years of actual effort, by pretty smart and DMS-savvy guys.)
There's still the test-construction problem (both of these examples above had great system tests already). Here I'm going to go out on a limb. I believe there is hope in getting such tools to automate test generation by instrumenting running code to collect function input-output results. We've built all kinds of instrumentation for source code (obviously you have to compile it after instrumentation) and think we know how to do this. YMMV.
There is something you can do which is considerably less ambitious: identify the reusable parts of the code by finding out what has already been reused in the code. Most software systems contain a lot of cloned code (our experience is 10-20%, and I'm surprised by the PHP report of smaller numbers in another answer; I suspect they are using a weak clone detector). Cloned code is a hint of a missing abstraction in the application software. If you can find the clones and see how they vary, you can pretty easily see how to abstract them into functions (or whatever) to make them explicit and reusable.
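As a tiny illustration of that abstraction step (hypothetical code, in TypeScript): two near-clones differ only in a constant and a label - exactly the kind of variation a clone detector surfaces - and the fix is to lift the variation into parameters.

```typescript
// Before: two near-clones, as a clone detector might report them.
// They differ only in the rate and the label - the missing abstraction.
function usTotal(amounts: number[]): string {
  const sum = amounts.reduce((a, b) => a + b, 0);
  return `US total: ${(sum * 1.08).toFixed(2)}`;
}
function caTotal(amounts: number[]): string {
  const sum = amounts.reduce((a, b) => a + b, 0);
  return `CA total: ${(sum * 1.13).toFixed(2)}`;
}

// After: the variation lifted into parameters; the clones collapse into
// one explicit, reusable function.
function regionTotal(amounts: number[], label: string, rate: number): string {
  const sum = amounts.reduce((a, b) => a + b, 0);
  return `${label} total: ${(sum * rate).toFixed(2)}`;
}
```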
Salion Inc. did clone detection and abstraction. The paper doesn't explore the abstraction activity; what Salion actually did was a periodic review of the detected clones, and manual remediation of the egregious ones, or of those where it made sense, into (often library) methods. The net result was that the code base actually shrank in size and the programmers became more effective because they had better ("more reusable") libraries.
They used our CloneDR, a tool for finding clones by using the program syntax as a guide. CloneDR finds exact clones and near misses (replacement of identifiers or statements) and provides a specific list of clone locations and clone parametrizations, regardless of layout and comments. You can see clone reports for a number of languages at the link. (I'm the originator and author of CloneDR, among my many hats.)
Regarding the "small clone percentage" for the PHP project discussed in another answer: I don't know what was being used for a clone detector. The only clone detector focused on PHP that I know is PHPCPD, which IMHO is a terrible clone detector; it only finds exact clones if I understand the claimed implementation. See the PHP example at our site for comparative purposes.
This is exactly what we've been doing for web2project for the past couple of years. We forked from an existing system (dotProject) that had terrible metrics: high cyclomatic complexity (low: 17, avg: 27, high: 195M), lots of duplicate code (8% of overall code), and zero tests.
Since the split, we've reduced duplicate code (2.1% overall), reduced the total code (200kloc to 155kloc), added nearly 500 unit tests, and improved cyclomatic complexity (low: 1, avg: 11, high: 145M). Yes, we still have a ways to go.
Our strategy is detailed in my slides here:
http://caseysoftware.com/blog/phpbenelux-2011-recap - Project Triage & Recovery; and here:
http://www.phparch.com/2010/11/codeworks-2010-slides/ - Unit Testing Strategies; and in various posts like this one:
http://caseysoftware.com/blog/technical-debt-doesn039t-disappear
And just to warn you.. it's not fun at first. It can be fun and satisfying once your metrics start improving but that takes a while.
Good luck.

Which are the advantages of splitting the developer's time between two projects? [closed]

I have two projects, with identical priorities and work hours demand, and a single developer. Two possible approaches:
Deliver one project first.
Split the developer's time and deliver both later.
I can't see any reason why people would choose the second approach. But they do. Can you explain to me why?
It seems to me that this decision often comes down to office politics. One business group doesn't want to feel any less important than another, especially with identical priorities set at the top. Regardless of how many different ways you explain why doing both at the same time is a bad idea, it seems as though the politics get in the way.
To get the best product to the users, you need to prevent developer thrashing. When the developers are thrashing, the risk of defects and length of delivery times begin to increase exponentially.
Also, if you can put your business hat on, you can try to explain to them that right now, nobody is getting any value from what the completed products will deliver. It makes more sense for the business to get the best ROI product out the door first to begin recouping the investment ASAP, while the other project will start as soon as the first is finished.
Sometimes you need to just step away from the code you have been writing for 11 hours in order to stay maximally productive. After you have been staring at the minutiae of a system you have been implementing for a long time it can become difficult to see the forest for the trees, and that is when you start to make mistakes that are hard to un-make.
I think it is best to have 2-3 current projects; one main one and 1-2 other projects that aren't on such a strict timeline.
If both projects have the same priority for the company, one obvious reason is for project managers to give higher management the illusion that both of the projects are taken care of.
Consider that the two projects could belong to different customers (or be requested by different people from higher management).
No customer wants to be told to wait while a different customer's project is given priority.
"We'll leave the other one for later" is, a lot of times, not an acceptable answer, even though this leads to delays for both projects.
I believe this is related to the notion of "Perceived Responsiveness" in a software program. Even if something takes more time to do, it looks faster when it appears to be doing something, instead of idly waiting for some other stuff to complete.
It depends on the dependencies involved. If you have another dependency upon the project that can be fulfilled when the project is not 100% complete, then it may make sense to split the developer's time. For example, if your task is large, it may make sense to have the primary developer do a design, then move on to a second task while a teammember reviews the design the primary developer came up with.
Furthermore, deserializing developers from a single task can help to alleviate boredom. Yes, there is potentially significant loss in the context switch, but if it helps keep the dev sharp, it's worth it.
If you go by what's in the great and holy book 'Peopleware', you should keep your programmer on one project at a time.
The main reason for this is that divided attention will reduce productivity.
Unfortunately, because so many operational managers are good businessmen rather than good managers, they may think that multitasking or working on both projects somehow means more things are getting done (which is impossible; a person can only physically exist in one stream of the space-time continuum at a time).
hope that helps :)
LM
I think the number 1 reason from a management standpoint is for perceived progress. If you work on more than one project at the same time stakeholders are able to see progress immediately. If you hold one project off then the stakeholders of that project may not like that nothing is being worked on.
Working on more than one project also minimizes risks somewhat. For example, if you work on one project first and that project takes longer than expected, you could run into issues with the second project. Stakeholders also most likely want their project done now. Holding one off due to another project can make them reconsider going ahead with it at all.
Depending on what the projects are you might be able to leverage work done in one for the other. If they are similar then doing both at the same time could be of benefit. If you do them in sequence only the subsequent projects can benefit from the previous ones.
Most often projects are not a constant stream of work. Sometimes developers are busy and sometimes not. If you only work on 1 project at a time a developer and other team members would likely be doing nothing while the more 'administrative' tasks are taking place. Managing the time over more than one project allows teams to get more done in a shorter timeframe.
As a developer I prefer working on multiple projects as long as the timelines are reasonable. As long as I'm not being asked to do both at the same time with no change in the schedule I am fine. Often if I'm stuck on one project I can work on the other. It depends on the projects though.
I'd personally prefer the former but management might want to see progress in both projects. You might also recognise inaccurate estimates earlier if you are doing some work on both, enabling you to inform the customer earlier.
So from a development perspective 1 is the best option but from a customer service point of view 2 is probably better.
It's about managing your clients' expectations. Telling both clients you are working on their project, but that it will take a little longer due to other projects, is better than saying you are putting their project off until you finish another one; in the latter case, the client is going to jump ship and find someone who can start working on their project now.
It's a placebo effect - splitting a developer between two projects in the manner you've described gives people/"the business" the impression that work is being completed on both projects (at the same rate/cost/time), whilst in reality it's probably a lot more inefficient, since context switching and other considerations carry a cost (in time and effort).
On one hand, it can get the ball rolling on things like requirement clarifications and similar tasks (so the developer can switch to the alternate project when they are blocked) and it can also lead to early input from other business units/stakeholders etc.
Ultimately though, if you have one resource then you have a natural bottleneck.
The best thing you can do for that lone developer is to intercept people (to keep them from distracting that person), and to carry some of the burden around requirements, chasing clarifications, and handling user feedback, etc.
The only time I'd ever purposely pull a developer off their main project is if they would be an asset to the second project, and the second project was stalled for some reason. If allowing a developer to split a portion of their time could help jump-start a stalled project, I'd do that. This has happened to me with "expert" developers - the ones who have a lot more experience/specialized skills/etc.
That being said, I would try to keep the developer on two projects for as little time as possible, and bring them back to their "main" project. I prefer to allow people to focus on one task at a time. I feel that my job as a manager is to balance and shift people's priorities and focus - and developers should just develop as much as possible.
There are three real-life advantages of splitting developers' time between projects that cannot be ignored:
Specialisation: doing or consulting on work that requires similar specialised knowledge in both projects.
Consistency and knowledge sharing: bringing consistency into the way two separate products are built and work, spreading knowledge across the company.
Better team utilisation: on a rare occasion when one of the projects is temporarily on hold waiting for some further input.
Splitting time between several projects is beneficial when it does not involve a significant change in context.
Having a developer work single-handedly on multiple software development projects negates the benefits of specialisation (there isn't any in this case), consistency and knowledge sharing.
It leaves just the advantage of time utilisation, however if contexts differ significantly and there is no considerable overlap between projects the overhead of switching will very likely exceed any time saved.
Context switching is a very interesting beast: contrary to its name implying a discrete change, the process is always gradual. There are various degrees of having context information in one's head: 10% context (shallow), 90% (deep). It takes less time to shallow-switch as opposed to fully switch; however, there is a direct correlation between the amount of context loaded (concentration on the task) and output quality.
It’s possible to fill your time entirely working on multiple distinct projects relying on shallow-switching (to reduce the lead time), but the output quality will inevitably suffer. At some point it’s not only “non-functional” aspects of quality (system security, usability, performance) that will degrade, but also functional (system failing to accomplish its job, functional failures).
By splitting the time between two projects, you can reduce the risk of delaying one project because of another.
Let's assume the estimate for both projects is 3 months each. By doing it serially, one after the other, you should be able to deliver the first project after 3 months, the second project 3 months later (i.e. after 6 months). But, as things go in software development, chances are that the first project encounters some problems so it takes 12 months instead. Or, even worse, goes into the "in use, but never quite finished" purgatory. The second project starts late or even never!
By splitting resources, you avoid this problem. If everything goes well with the second project, you are able to deliver it after 6 months, no matter how well the first project does.
The real life situations where working on multiple projects can be an advantage is in the case where the spec is unclear (every time) and the customer is often unavailable for clarification. In those cases you can just switch to the other project.
This will cause some task switching and should be avoided in a perfect world, but then again...
This is basically my professional life in a nutshell :-)

What steps make up your web development process and how much time does each phase take?

Let's say you work 100 days on a project. How many days would each phase of your process (requirements analysis, specification, etc.) take?
I'm interested also in the ratio of specific activities in every phase, such as writing tests, back-end coding, front-end coding, visual design, database design etc.
Many thanks!
EDIT:
Just to make things clear, I'm not talking about web site design - I'm interested in more "serious" web development, such as custom business web applications. I know, everything depends on the specifics of each project, however I suppose the ratios could be roughly the same from project to project.
EDIT2:
As Helen correctly remarked, this question is really hard to answer, since projects can be so different and so can teams. To make it more specific, let's say you have a team of four developers - two of them for back-end work, one for front-end programming and one for design & HTML/CSS coding (one member of the team acts as a project manager) - and you are supposed to develop the StackOverflow.com site.
We're running agile scrum projects, so we typically run all these activities in parallel. So while I cannot answer your exact question, I can give you some ideas of the ratios we have found to be effective:
4-5 developers can be served by one client-side programmer (html/css), one on-team tester and one interaction designer (who works with the customer to design wireframes). A team like this typically needs a 50% graphic designer for most applications, but your mileage may vary there. Then there's the project manager, and there are all sorts of other stakeholders who are not part of the core development team.
In the development team you normally have a couple of developers who are sharp on client-side development and a similar number on the back-end. These staffings also tend to reflect resource usage ;) Testing is an integral part of development, in addition to the efforts of the on-team tester.
Your local conditions may of course vary, but these numbers are just to give you some idea.
Step 1: denial
Step 2: anger
Step 3: acceptance
The time each step takes is different for all team members involved.
I agree with everyone who started along the lines of, "It depends on the project".
On the other hand, I do think that there's a consistent process that can be followed; only tweaking the percentages of effort to match the project:
Typically, I follow these basic principles:
Discovery - determine the feature/functionality of the system. The easiest (and worst) thing to do is accept what's being asked for and go with it.
For example, "building stackoverflow.com" is a pretty broad request - and is actually the wrong request. The project has to start with "I need an online location where programmers can collaborate".
Based on that one thing you're trying to solve, you can drill down into all the details you want - like how a question will be answered, asked, rated, etc.
I think this is the most crucial step! output = requirements/specification; 20/100 days can safely be spent here
Wireframing - this is where I like to use basic HTML pages, Paint.NET, or even construction paper and glue to mock up every aspect of the final site's functionality. I like using paper because it's easy to make changes :)
Going through this process forces you to consider just about every aspect of the user experience and gives you the flexibility to add/remove features and adjust your requirements as needed. Your customer has some input to changes before you've committed a bunch of time to writing code.
An added bonus is that you get to use paste :)
10/100 days
Implementation/Testing - I group implementation AND testing together because I think it's short sighted to develop a whole site without testing along the way. (At the same time, you still need step 4). This is the part where the rubber hits the road. If you've handled your client properly in steps 1 and 2, you'll be pleasantly writing your code without any last-minute changes in scope (or at least very few). I try to follow a general set of steps for implementation:
data development (db design, query design, sample data setup)
site framework (set up your environment(s); production, dev, and qa)
front-end structure (css, standard classes, standard html structures)
start coding!
55/100 days
SQA - hopefully you can get some non-involved parties/end users to test the app out as you go. Test plans need to be developed to make it clear what should be tested and what the desired outcomes are. I like using real people for testing the front end; automated tools are fine for code/back-end modules
This is a good time to let the client see things progressing - they should have very limited ability to make changes at this point.
10/100 days
Delivery/Post Production honeymoon - you've built it, tested it, and you're ready to deploy. Get the code out there and let the client play. You shouldn't have much to tweak; but I'm sure there will be some adjustments.
5/100 days
Some of this seems idealistic; but you'd be surprised how quickly you can ship your application when you've got a well-reviewed, well-created specification.
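The phase breakdown above can be sketched as a simple proportional allocation (the phase names and percentages are taken from this answer; the function name is mine):

```python
# Phase percentages as given in the answer above (out of 100 days).
PHASES = {
    "discovery": 20,
    "wireframing": 10,
    "implementation_and_testing": 55,
    "sqa": 10,
    "delivery": 5,
}

def allocate(total_days: float) -> dict:
    """Scale the phase percentages to a project of total_days."""
    return {phase: total_days * pct / 100 for phase, pct in PHASES.items()}

# A 100-day project gets 20 days of discovery, 55 of implementation/testing, etc.
schedule = allocate(100)
print(schedule)
```

Nothing more than proportional scaling, but it makes the answer's point explicit: over half the budget goes to implementation and testing, and a fifth to discovery before any code is written.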
It is impossible to give a meaningful answer to this question. The ratios will not be even roughly the same from project to project. For some projects the visual design barely matters (as long as it more or less works) but the database is critical and complex. For others, it's all about providing a smooth user experience with lots of AJAX goodies and other eye candy, but the underlying data is trivially simple to organise and store.
It sounds like you're thinking mainly of one-man projects, but for larger teams the size and setup of the team also matters, as well as your development process.
Probably we are an unusual development shop. Our whole existence (at least during work hours) is requirements gathering. Developers are required to also work in every other department, be it answering the phone in after-sales support (and fighting the CRM software), driving a forklift in the warehouse (and fighting the mobile terminals) or packing crates in the shipping station (and fighting confusing delivery notes).
When we tackled a new project, "requirements gathering" was usually an afternoon on the whiteboard, usually with somebody from the department that used the new software most. There was little upfront design and lots of refactoring and rewriting. We were very happy with this and generated about 100,000 lines of code which are well-architected and stable.
But it seems we are hitting a complexity barrier now. This is very frustrating because moving to "heavier" processes than hack and slay coding results in a dramatic loss of productivity.
Just to be clear - you're basically time-boxing your work, which is directly related to having a fixed budget (4 developers x $x per day x 100 days - assuming it's a 100-day duration and not 100 days of work effort). If that's the case, then on average you would spend:
25% up front planning which includes scope, spec development, technology approach, logistics (computers, servers, work space), resource gathering.
50% development - test case (TDD) development, schema design and implementation, front-end coding, back-end coding, deployment
15% Testing - basic break/fix activities
10% overhead/management - project management, communication and coordination.
Very rough estimates - there are many 'areas' to consider, including resource skills/maturity, the technology being used, location of resources (one room or across the country), level of requirements, etc. The use of 'skill-specific' resources would make planning more difficult, since you might need the resources to perform multiple roles - one suggestion would be to get three generalists who can help spec/design/plan and one tech wizard who would ensure the platform and database are set up correctly (key to success once you have requirements that are as good as possible)
That is truly a tricky question. To give a somewhat exact estimate of the ratio of time to apply to each step - if we take a classical approach of design, implement, test and deploy - one needs to know the specification and the expertise of the project members.
If you take McConnell's book "Software Estimation" (which I highly recommend) you have a chapter in there about historical data and how to use it for future projects.
I do not think that you have exact historical data from former projects - well - I don't have any - although I always remind myself to record them ;)
Since the smallest failures or uncertainties in the design-phase are the most crucial ones take a lot of time to specify what you want to do. Make sure that everyone understands it the same way and write it down.
To cut a long story short - I'd put 50% - 75% of the time in the design (if 75% this would include a prototype to clear all uncertainties) and equal parts in implementation and test.
If you are using TDD you would mix design and test a bit so you would take a bit of the design-phase and add it to the test-phase.
1. Building a list of client needs: 1-2 days
This depends on the client, what they need and how well prepared they are.
2. Designers do initial sketch-ups: 2-3 days
A bit of branching happens here, as steps 2 and 3 will happen concurrently.
3. Programmers build any functionality not already in our existing system: 1 day - 1 month
This depends on the client and what they need more than most anything else.
This also will only produce functional code.
4. Repeat steps 2 & 3 until the client is happy with the general feeling of what we have.
Could be 1 iteration, could be 100 (not likely - if by 10 we couldn't make them happy, we'd send them somewhere else).
5. Build final design: 1-5 days
This is the final version: no errors, valid CSS/HTML/JS, everything is cross-browser, etc.
6. Build final functionality: 2-3 days
This code is "perfect": it works 100%, it is pretty, there are no known bugs, and the developers are happy to ship it.
This and step 5 happen concurrently.
7. Deploy: 10 seconds.
Then 2 weeks, 2 months and 6 months later we do a review to make sure there have been no problems.
So if you skip the review, this usually takes 8-20 days; I don't know how you'll work that into 100 days.
If we are just building an application (or extending one) for a client, we would spend 2-3 days defining EXACTLY what they need, then however long it takes to build it.

I need this baby in a month - send me nine women!

Under what circumstances - if any - does adding programmers to a team actually speed development of an already late project?
The exact circumstances are obviously very specific to your project ( e.g. development team, management style, process maturity, difficulty of the subject matter, etc.). In order to scope this a bit better so we can speak about it in anything but sweeping oversimplifications, I'm going to restate your question:
Under what circumstances, if any, can adding team members to a software development project that is running late result in an earlier actual ship date, with a level of quality equal to what the existing team would have achieved if allowed to work until completion?
There are a number of things that I think are necessary, but not sufficient, for this to occur (in no particular order):
The proposed individuals to be added to the project must have:
At least a reasonable understanding of the problem domain of the project
Be proficient in the language of the project and the specific technologies that they would use for the tasks they would be given
Their proficiency must not be much below that of your weakest existing member, nor much above that of your strongest. Weak members will drain your existing staff with tertiary problems, while a new person who is too strong will disrupt the team by pointing out how everything they have done and are doing is wrong.
Have good communication skills
Be highly motivated (e.g. be able to work independently without prodding)
The existing team members must have:
Excellent communication skills
Excellent time management skills
The project lead/management must have:
Good prioritization and resource allocation abilities
A high level of respect from the existing team members
Excellent communication skills
The project must have:
A good, completed, and documented software design specification
Good documentation of things already implemented
A modular design to allow clear chunks of responsibility to be carved out
Sufficient automated quality-assurance processes for the required defect level (these might include such things as unit tests, regression tests, automated build deployments, etc.)
A bug/feature tracking system that is currently in-place and in-use by the team (e.g. trac, SourceForge, FogBugz, etc).
One of the first things that should be discussed is whether the ship date can be slipped, whether features can be cut, and whether some combination of the two will let you satisfy the release with your existing staff. Many times it's a couple of features that are really hogging the team's resources yet won't deliver value equal to the investment. So give your project's priorities a serious review before anything else.
If the outcome of the above paragraph isn't sufficient, then visit the list above. If you caught the schedule slip early, the addition of the right team members at the right time may save the release. Unfortunately, the closer you get to your expected ship date, the more things can go wrong with adding people. At one point, you'll cross the "point of no return" where no amount of change (other than shipping the current development branch) can save your release.
I could go on and on, but I think I hit the major points. Outside of the project - in terms of your career, the company's future success, etc. - one of the things that you should definitely do is figure out why you were late, whether anything could have been done to alert you earlier, and what measures you need to take to prevent it in the future. A late project usually occurs because you either:
were late before you started (more stuff than time), and/or
slipped one hour, one day at a time.
Hope that helps!
It only helps if you have a resource-driven project.
For instance, consider this:
You need to paint a large poster, say 4 by 6 meters. A poster that big, you can probably put two or three people in front of it, and have them paint in parallel. However, placing 20 people in front of it won't work. Additionally, you'll need skilled people, unless you want a crappy poster.
However, if your project is to stuff envelopes with ready-printed letters (like "You MIGHT have won!"), then the more people you add, the faster it goes. There is some overhead in doling out stacks of work, so you can't get benefits all the way up to the point of one person per envelope, but you can get benefits from much more than just 2 or 3 people.
So if your project can easily be divided into small chunks, and if the team members can get up to speed quickly (like... instantaneously), then adding more people will make it go faster, up to a point.
Sadly, not many projects in our world are like that, which is why docgnome's tip about the Mythical Man-Month book is really good advice.
Maybe if the following conditions apply:
The new programmers already understand the project and don't need any ramp-up time.
The new programmers already are proficient with the development environment.
No administrative time is needed to add the developers to the team.
Almost no communication is required between team members.
I'll let you know the first time I see all of these at once.
According to the Mythical Man-Month, the main reason adding people to a late project makes it later is the O(n^2) communication overhead.
I've experienced one primary exception to this: if there's only one person on a project, it's almost always doomed. Adding a second one speeds it up almost every time. That's because communication isn't overhead in that case - it's a helpful opportunity to clarify your thoughts and make fewer stupid mistakes.
Also, as you obviously knew when you posted your question, the advice from the Mythical Man-Month only applies to late projects. If your project isn't already late, it is quite possible that adding people won't make it later. Assuming you do it properly, of course.
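Brooks's O(n^2) claim falls out of counting pairwise communication channels: a team of n people has n(n-1)/2 of them. A quick illustration (the team sizes are arbitrary examples):

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n:2d} people -> {channels(n):3d} channels")

# Doubling the team from 5 to 10 grows the channels from 10 to 45:
# the coordination overhead grows roughly quadratically, not linearly.
```

This is also why the two-person exception mentioned above is cheap: going from one person to two adds exactly one channel, and that channel is mostly useful rather than overhead.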
If the existing programmers are totally incompetent, then adding competent programmers may help.
I can imagine a situation where you had a very modular system, and the existing programmer(s) hadn't even started on a very isolated module. In that case, assigning just that portion of the project to a new programmer might help.
Basically the Mythical Man Month references are correct, except in contrived cases like the one I made up. Mr. Brooks did solid research to demonstrate that after a certain point, the networking and communication costs of adding new programmers to a project will outweigh any benefits you gain from their productivity.
If the new people focus on testing
If you can isolate independent features that don't create new dependencies
If you can orthogonalise some aspects of the project (especially non-coding tasks such as visual design/layout, database tuning/indexing, or server setup/network configuration) so that one person can work on that while the others carry on with application code
If the people know each other, and the technology, and the business requirements, and the design, well enough to be able to do things with a knowledge of when they'll step on each other's toes and how to avoid doing so (this, of course, is pretty hard to arrange if it isn't already the case)
Only when, at that late stage, you have some independent tasks (almost 0% interaction with other parts of the project) not yet tackled by anybody, and you can bring onto the team somebody who is a specialist in that domain. The addition of a team member has to minimize the disruption for the rest of the team.
Rather than adding programmers, one can think about adding administrative help. Anything that will remove distractions, improve focus, or improve motivation can be helpful. This includes systems administration as well as more prosaic things like getting lunches.
Obviously every project is different, but most development jobs involve a certain amount of collaboration among developers. Where this is the case, my experience has been that fresh resources can unintentionally slow down the very people they rely on to bring them up to speed - and in some cases these can be your key people (incidentally, it's usually 'key' people who would take the time to educate a newb). Once they are up to speed, there is no guarantee that their work will fit the established 'rules' or 'work culture' of the rest of the team. So again, it can do more harm than good. That aside, these are the circumstances where it might be beneficial:
1) The new resource has a tight task which requires a minimum of interaction with other developers, and a skill set that's already been demonstrated (e.g. porting existing code to a new platform, or refactoring a dead module that's currently locked down in the existing code base).
2) The project is managed in such a way that other, more senior team members' time can be shared to help bring the newb up to speed and to mentor them along the way, ensuring their work is compatible with what's already been done.
3) The other team members are very patient.
I suppose the adding people toward the end of the work could speed things up if:
The work can be done in parallel.
The amount saved by added resources is more than the amount of time lost by having the people experienced with the project explain things to those that are inexperienced.
EDIT: I forgot to mention that this kind of thing doesn't happen all too often. Usually it is fairly straightforward stuff, like admin screens that do simple CRUD on a table. These days these types of tools can mostly be autogenerated anyway.
Be careful of managers who bank on this kind of work to hand off, though. It sounds great, but in reality there usually isn't enough of it to trim any significant time off the project.
Self-contained modules that have yet to be started
Development tooling the project lacks that they can set up and integrate (like an automated build manager)
Primarily I'm thinking of things that let them stay out of the currently developing people's way. I do agree with Mythical Man-Month, but I also think there are exceptions to everything.
I think adding people to a team may speed up a project more than adding them to the project itself.
I often run into the problem of having too many concurrent projects. Any one of those projects could be completed faster if I could focus on that project alone. By adding team members, I could transition off other projects.
Of course, this assumes that you've hired capable, self-motivated developers, who are able to inherit large projects and learn independently. :-)
If the extra resources complement your existing team, it can be ideal. For example, if you are about to set up your production hardware and verify that the database is actually tuned (as opposed to just returning good results, which your team, as domain experts, already knows), borrowing time from a good DBA who works on the project next to yours can speed the team up without much training cost.
Simply put, it comes down to comparing the productivity you will get from the additional resources in the time left, minus the time it takes them to come up to speed, minus the time existing resources invest in teaching them. The key factors (in order of significance):
How good the resource is at picking it up. The best developers can walk onto a new site and be productive fixing bugs almost instantly with little assistance. This skill is rare but can be learnt.
The segregability of tasks. They need to be able to work on objects and functions without tripping over the existing developers and slowing them down.
The complexity of the project and the documentation available. If it's a vanilla best-practice ASP.NET application with common, well-documented business scenarios, then a good developer can just get stuck in straight away. This factor, more than any other, will determine how much time the existing resources have to invest in teaching, and therefore the initial negative impact of the new resources.
The amount of time left. This is often mis-estimated too. Frequently the logic will be: we only have x weeks left and it will take x+1 weeks to get someone up to speed. In reality the project IS going to slip and does in fact have 2x weeks of development left, so getting more resources on sooner rather than later will help.
Where a team is already used to pair programming, adding another developer who is already skilled at pairing may not slow the project down, particularly if development is proceeding in a TDD style.
The new developer will slowly become more productive as they understand the code base more, and any misunderstandings will be caught very early either by their pair, or by the test suite that is run before every check-in (and there should ideally be a check in at least every ten minutes).
However, the effects of the extra communication overheads need to be taken into account. It is important not to dilute the existing knowledge of the project too much.
Adding developers makes sense when the productivity contributed by the additional developers exceeds the productivity lost to training and managing those developers.
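That break-even condition can be expressed as a toy model (the function name and all the numbers below are hypothetical, chosen only to illustrate the trade-off):

```python
def net_gain(new_devs: int, weeks_left: float, ramp_up_weeks: float,
             dev_output: float, mentor_cost: float) -> float:
    """
    Toy model: each new developer produces dev_output units/week once
    ramped up, but burns mentor_cost units/week of the existing team's
    output during ramp-up. Returns net units of output gained (or lost).
    """
    productive_weeks = max(0.0, weeks_left - ramp_up_weeks)
    gained = new_devs * productive_weeks * dev_output
    lost = new_devs * min(ramp_up_weeks, weeks_left) * mentor_cost
    return gained - lost

# Two new devs, 10 weeks out, with a 3-week ramp-up: a net win.
print(net_gain(2, 10, 3, dev_output=1.0, mentor_cost=0.5))   # 11.0
# The same two devs added only 4 weeks out: a net loss.
print(net_gain(2, 4, 3, dev_output=1.0, mentor_cost=0.5))    # -1.0
```

The model is deliberately crude (it ignores the quadratic communication overhead discussed elsewhere on this page), but it captures why the same hire can be a win early and a loss late: the ramp-up cost is fixed while the productive window shrinks.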
