Do you think that project iteration length is related to project team size? If so, how? What other key factors do you use to determine the correct iteration length for different projects?
Iteration length is primarily related to the team's ability to communicate and complete a working version of the software. More team members means more communication channels (with n people there are n(n-1)/2 pairwise channels; see Brooks's Law), which will likely increase your iteration time.
I think that 2 week iterations, whether you deliver to the client or not, are a good goal, as they allow for very good health checks.
Ultimately, the iteration length will depend on the features you wish to implement in the next iteration, and in the early phases your iterations may jump around from 1 week to 1 month as you become comfortable with the team and the technology stack.
One of the main drivers for short iterations is easing integration between modules/features/programmers. Obviously, the bigger your team, the more integration you will have. It's a tradeoff: short iterations mean you're integrating often, which is good - BUT if it's a big team you'll be spending a LOT of time on integration overhead, even without new code. Longer iterations obviously mean more integration each time, less often, and a lot more risk.
If your team is very large, you can try branched integration, i.e. integrating small subteams often, and integrating between the teams less often... BUT then you'll be having inconsistencies between branches, and you lose much of that benefit right there.
Another key factor to consider is complexity - complex backend systems obviously make for riskier integration, while simple Web-UI pages are less risky.
(I realize I didn't give you a clear-cut answer; there is none. It's always tradeoffs. I hope I gave you some food for thought.)
My experience is that the length of the iterations is somewhat dependent on team size.
External dependencies were another factor we observed, as in cases where we had to integrate with in-house systems that were not using an iteration-based development cycle (read: waterfall).
Our team were real noobs when it came to iterative development, so in the beginning the iterations were really long (12 weeks). But later on we saw that there was no need to worry, and the iterations shrank considerably (4-6 weeks).
So another factor in iteration length is how familiar you are with the concept of iterative development.
I think that 2 week iterations, whether you deliver to the client or not, are a good goal, as they allow for very good health checks.
2-week iterations are most comfortable for me and the kinds of projects I usually do, but I disagree that not delivering is a good outcome - the focus needs to stay on the "working software over process" side of things.
I would consider making iterations longer if the product owner / user isn't available, even if only for a showcase every couple weeks, as the same health checks that fast iterations allow on the technical side need to happen on the side of the engagement with the business.
Iteration length should be decided on many factors... and team size is really only part of the considerations made for the "Overhead of Iterating".
This article explains many of them.
The important ones IMO:
Overall Release Length
How Long Priorities Can Remain Unchanged
The Overhead of Iterating
There is a relation in terms of how much work can get done, but there are a couple of other key factors here: what type of project the team is working on (e.g. Windows application, console application, or web application); how developed the codebase is in terms of size, complexity, and style compared to the current team's style; and what expertise the team has, both with the methodology and with the work they are doing, since inexperience can be costly in getting everyone proficient with the process.
I'm a member of a software development team, working on a small project.
We think that we can release a beta quality product after 2 or 3 months of continuous work.
Since this is our first team project, I decided to ask: which software development methodology would you suggest for a small project with a small number of developers (less than 10)?
There are two approaches to software development:
Write down what you are going to do, do it, then agree that you have done it.
Start developing stuff, agree that what you have done is good, repeat until finished.
Both have their adherents and both pop up repeatedly under a variety of names. Each new generation of software developers (ie about every 2 years, this is a fast changing industry and software developers have the lifespan of a mayfly) rejects the previous generation's approach, re-discovers the approach used by the generation before last, renames it something funky and declares it to be the ONE TRUE WAY.
The choice between the approaches ought to depend on the culture of (a) the customer organisation and (b) to a lesser extent, the culture of the supplier organisation (ie your software developer team).
So, if you work for a buttoned-down conservative enterprise approach 1 is indicated. If you look down and see that you are wearing surf shorts and came to work this morning on your skateboard, go with approach 2.
And, in case you have read this far, the most serious bit is the paragraph before the one before this final one, i.e. the one starting 'The choice ...'. This is a cultural / organisational issue rather than a technical one. Both approaches have been used on many, many successful projects; neither has a monopoly on unsuccessful projects.
This really does depend on what you are intending to build. If the project is going to be something you want to build upon and release at regular intervals, something like Agile / Scrum would be very well suited.
But it really depends on what the project is when determining release iterations and the like.
I think that you need to start from the Joel Test and try to implement most of this list:
http://en.wikipedia.org/wiki/The_Joel_Test
And for product development, use KISS (Keep It Simple, Stupid) for the first release.
Also, a really good start is the Getting Real book, available free from 37signals:
http://gettingreal.37signals.com/toc.php
This really does depend on your customer.
If the customer can accept fixed time, fixed resources, fixed quality (100% working code), and slightly variable scope, I recommend choosing an agile methodology.
If the customer cannot accept the above, i.e. the pre-condition for using an agile methodology is not present, I recommend choosing any methodology you like.
The important thing is that you do have a methodology, learn what is working as you go, and use the knowledge to adapt the methodology.
Don't do waterfall; this never worked and will never work. Thinking waterfall is a working methodology is like thinking banging your head against the wall is good, because even the sturdiest wall MUST crumble at some point.
I'd go with a reasonable agile methodology, like Scrum (XP is a bit harsh). Also, introduce things like TDD (test-driven development), DDD (domain-driven design), and DbC (design by contract) and you should be fine.
I won't suggest this as THE best answer, without having a better idea of the context and circumstances, but I am personally becoming a fan of the Lean / Kanban approach. In general I find a lot of the agile / scrum methods can be fairly developer focused, and almost anti-manager sometimes, which is sometimes appropriate but not always. The lean approaches tend to address the entire value stream rather than just the development itself.
You can read more about it at: http://www.limitedwipsociety.org/
I have a question which is not strictly-speaking programming related, but nevertheless caused by being an analyst and a programmer at the same time.
It's about starting new projects which seem to be unfeasible, because they have an unknown domain, lack specifications, and/or require technology which I am not familiar with. I get some sort of panic when I approach such a project, and then relax as my understanding of the domain and technology grows.
Is this something you experience? How do you cope with it?
The best way that I know of to try to contain and control the human factors in a project is to have a clear idea of your own processes.
Start with some Domain Driven Design, work with the users and help them to understand their domain and the business processes that surround the domain. Developers are often far better at abstraction than the managers/business people, so we can help them to understand their own domain.
Build up a set of acceptance criteria; these form your tests, which in turn form your spec.
Once you have an idea of the above, you know much more about feasibility and how long it will take (and even whether the technology that has been specified is the right one).
As for approaching new technologies, start small, build a proof of concept and make your mistakes there rather than on production code. There is a huge amount of best practice on the web and places like StackOverflow are good places to start.
I would suggest working in an agile fashion, get the project owners to prioritise the work that needs to be done, work out what is needed for the next two week sprint and deliver it (which may mean stubbing out a lot of functionality). They'll tell you when it is wrong and it may influence their own decision making.
Don't view the entire project as a nasty whole; break it down into deliverable sections and take it one step at a time.
Calm down.
If the project is initially infeasible (even if only in your own mind) then start with a feasibility study. This is a sub-project with which you will define the project (or at least the next sub-project).
You've already defined several major tasks within the feasibility study: learn about the domain, write some specifications, learn enough about the new technologies.
As for me, no I never panic about this sort of situation, I love starting with a blank sheet of paper, and experience has taught me how to start filling it in real quick.
So, take a few deep calming breaths and jump in.
Yep, I get this feeling all the time. But I always think that technologies are like tools. Once you get the hang of handling them, the rest will be easy.
Whenever I don't feel like that is when disaster lurks! It's like eating an elephant: just do it one bite at a time. Do some part you do understand, and that gives you a handle on the next bit.
unfeasible,
unknown domain,
lack specifications,
require technology which I am not familiar with
I think that's how we start our life too. As long as you are confident that you can pull it off, just stick to it and you will see that things are working in your favor provided:
You understand the importance of being a self-starter
You take responsibility for who you are
You ask the right questions at the right time
All the best!!!
Often the trouble with these infeasible projects is that the client is on a limited budget and will go bust before you complete your feasibility study. In this case it might be worth taking a step back from technology and looking at economics. Maybe sub-contracting to someone with the required knowledge will ease the pain.
At what point in a team's growth must process change drastically? A lone coder can get away with source control and a brain. A team trying to ship large prepackaged software to local and international markets must have a bit more in place.
If you've experienced a large transition in 'process': Was the team's process successfully changed with the current members or was the team itself mostly replaced by the time the process change came? What were the important aspects that changed, were some unnecessary?
You are going to find it hard to get a quantitative answer to this. It really depends on how "wide" the project is. If there is one simple subsystem to work on, any more than 1 person will result in people stepping on other people's toes. If you somehow have a project with 8 segregated subsystems, then you can have 8 developers no problem. More likely, you will have several subsystems that depend on each other to varying degrees. Oh, and all of this is true regardless of the process you are using.
Choosing what type of process you want to use (spiral, scrum, waterfall, CMM, etc.), and how heavyweight a version of that process you want to implement is, is another problem, and it's difficult. This is mainly because people try to take processes that work in building construction, factory work, or some other industry that is nothing like software and apply it to software development.
In my opinion, McConnell writes the best books on software process, even though none of his books are process books, per se.
If memory serves me correctly, anything above five people is where things get dicey. The number of paths of communication within the team gets really large after that.
(2 people = 1 path, 3 = 3 paths, 4 = 6 paths, 5 = 10 paths and so on).
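That arithmetic is the standard n(n-1)/2 pairwise-channel formula. A minimal sketch (the function name is just illustrative):

```python
def communication_paths(team_size: int) -> int:
    """Number of pairwise communication channels in a team of n people: n*(n-1)/2."""
    return team_size * (team_size - 1) // 2

# Reproduces the figures above: 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, ... 8 -> 28
for n in range(2, 9):
    print(n, "people:", communication_paths(n), "paths")
```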
The last three places I've been at, the IT team went through a massive process change. Yes, you will lose people, probably some of the better ones too. It's not that they are stubborn and trying to stick to the old ways; it's just that a change like this will cause a massive amount of stress. There are deadlines to hit and a need for quality to be met. People will get confused about what process they are supposed to follow, and many will fall back to the "old ways." (I've been guilty of this too, I admit.)
The key to succeeding is to take it slow and in small steps. People need to take time to understand why the process is changing and how it benefits them. That is huge, if you don't take time to do this, it won't succeed, and key people will end up quitting causing turmoil.
One of the things to absolutely remember is that ultimately some turnover is good. It brings new ideas and people with different (and sometimes better) skill sets. You shouldn't try to force change onto people rapidly, but they shouldn't be a barrier either. If they don't agree with what is going on, they should either try to come to a middle ground with the people driving the process or leave. One of the real eye-openers I learned at my first job is that in reality everyone is replaceable. Someone will eventually be able to step in and take the reins.
In my experience this transition occurs at exactly the moment at which you also need management. It is hard to get above 8 developers without some over-arching coordinating function being in place, whether it is a team lead, segregation of tasks or old fashioned management. The reality I have witnessed is that even with the best, most talented, most bought-in developers you still need coordination when you get above 8 working concurrently.
And there is a discontinuous step in process as you cross that boundary. However it needn't be catastrophic. The best approach is for the team to adopt as much formal process as it can when still small so that all the necessary behaviour and infrastructure is in place. I would argue that this is synonymous with good development in any case, so even the lone developer ought to have it (source code control, unit tests and coding standards are all examples of what I am talking about). If you take this approach then the step up in process when it occurs is not so much a jolt as a rigorous coordination.
Every developer you add needs to be brought in with the process already in place. When you get to 8 (or whatever the number turns out to be for you), you'll find that your team meetings get a little too loose and wordy, personalities start playing a part, and it is more effective to divide activity up. At that moment your boss (or you, if you are the boss) will need to appoint one or more coordinators to distribute and control work.
This process should scale up as well if you stick to your processes. Teams can sub-divide, or you can bud teams out to perform functional tasks if necessary. This approach also works regardless of the methodology you have chosen for your project management, be it Agile or not.
Once you get up to 4 or 5 teams, i.e. 30-50 people then you will almost certainly need people who are happy that their sole task is coordination.
If you are small now and contemplating or expecting the complexity shift, then make sure you get your fundamental processes nailed down immediately and certainly before you start adding more staff.
HTH and good luck
A lot depends on the people working on the project, the integration points, etc. Even a single coder should have a code management tool and needs to interact with clients or the 'boss'. From experience, I've seen a change once there are more than 2 people, and then probably at any increase of 50%. If teams are organized as project teams, focused on decoupled parts of the product, the overhead will not increase exponentially as the team size increases (project vs. matrix organizations).
I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't start planning for it but then I think that maybe it's for motivational purposes.
Ryan
It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).
This is based on Joel Spolsky's Evidence Based Scheduling, essential reading, as he explains that the primary other important aspect is breaking your tasks down into bite-sized (16-hour max) tasks, estimating and adding those together to arrive at your final project total.
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
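As a rough illustration of that bookkeeping (not Joel's actual EBS algorithm, just a hedged sketch with invented task names and the double/triple rule of thumb from above):

```python
# Hypothetical task list: (task name, estimated hours)
tasks = [
    ("design users page", 4),
    ("design tags page", 3),
    ("implement users page", 8),
    ("implement tags page", 6),
    ("write tags query", 2),
]

raw_total = sum(hours for _, hours in tasks)
doubled = raw_total * 2            # "add it up and double it"
with_coordination = raw_total * 3  # "if you have to coordinate with others, triple it"

print(f"raw: {raw_total}h, doubled: {doubled}h, with coordination: {with_coordination}h")
```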
I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects between 1 - 6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects, as there are too many factors that mean the metrics from one aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates are going to have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration, as agile practitioners advocate: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being effectively spent. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available to do it. No matter how ruthless you are cutting unnecessary features, you would have been more brutal if the deadlines were even tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.
I believe Joel has written an article on this. What you can do is ask each developer on the team to lay out his task in detail (what are all the steps that need to be done) and ask them to estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias for each developer. When a new project is started, ask them to estimate the time again, and multiply that by each developer's bias to get values close to what can really be expected.
After a few projects, you should have very good estimates.
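A hedged sketch of that bias calculation (the history numbers are invented for illustration; full evidence-based scheduling works with the whole distribution of past velocities rather than a single average):

```python
# Historical (estimated, actual) hours for one developer -- illustrative data only.
history = [(8, 12), (5, 9), (16, 20), (3, 6)]

# Average ratio of actual to estimated time ("bias") for this developer.
bias = sum(actual / estimated for estimated, actual in history) / len(history)

new_estimate_hours = 10
corrected = new_estimate_hours * bias
print(f"bias: {bias:.2f}, corrected estimate: {corrected:.1f}h")
```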
Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate.
So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else?
Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA) and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to figure out time estimates for bug fixing but not for testing)?
It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, the number of developers available at this stage in the project (many project managers will move people onto something new at the end).
As Rob Rolnick says, 1:1 is a good rule of thumb; however, in cases where a specification is bad the client may push for "bugs" which are actually badly specified features. I was recently involved in a project which used many releases, but more time was spent on bug fixing than actual development due to the terrible specification.
Ensure a good specification/design and your testing/bug fixing time will be reduced because it will be easier for testers to see what and how to test and any clients will have less lee-way to push for extra features.
Maybe I just write buggy code, but I like having a 1:1 ratio between devs and tests. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there could be a good deal of time between when development starts and when your alpha, beta, and ship dates are. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix.
A good tester, who finds bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK.) Simply put, I am still extremely familiar with my code, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time for bug fixing. At least when I do estimates. So in a 16 week run I'd leave around 2-3 weeks.
Only a good amount of accumulated statistics from previous projects can help you give precise estimates. If you have a well defined set of requirements, you can make a rough calculation of how many use cases you have. As I said, you need to have some statistics for your team. You need to know the average bugs-per-LOC number to estimate the total bug count. If you don't have such numbers for your team, you can use industry average numbers. After you have estimated the LOC (number of use cases * NLOC per use case) and the average bugs per line, you can give a more or less accurate estimate of the time required to release the project.
From my practical experience, time spent on bug-fixing is equal to or more (in 99% cases :) ) than time spent on original implementation.
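A back-of-the-envelope version of that calculation (all numbers below are placeholders, not industry figures; substitute your own team's statistics):

```python
# Placeholder inputs -- replace with your team's own historical data.
use_cases = 40
loc_per_use_case = 250      # NLOC per use case, from past projects
bugs_per_kloc = 15          # average defects per 1000 lines for the team
hours_per_bug_fix = 2.5     # average time to fix one defect

estimated_loc = use_cases * loc_per_use_case
estimated_bugs = estimated_loc / 1000 * bugs_per_kloc
bug_fixing_hours = estimated_bugs * hours_per_bug_fix

print(f"~{estimated_loc} LOC, ~{estimated_bugs:.0f} bugs, ~{bug_fixing_hours:.0f}h of bug fixing")
```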
From the testing Bible:
Testing Computer Software
p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
Use a language with Design-by-Contract or "Code-contracts" (preconditions, check assertions, post-conditions, class-invariants, etc) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code with its contracts.
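The answer has Eiffel in mind, but the idea translates to most languages. A minimal Python sketch of contract-style checks (preconditions, a postcondition, a class invariant) combined with a TDD-style test; the account example is invented purely for illustration:

```python
class Account:
    """Toy example of contract-style checks enforced with assertions."""

    def __init__(self, balance: float = 0.0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self):
        assert self._balance >= 0, "class invariant: balance never negative"

    def withdraw(self, amount: float) -> float:
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= self._balance, "precondition: sufficient funds"
        old_balance = self._balance
        self._balance -= amount
        assert self._balance == old_balance - amount, "postcondition: balance reduced by amount"
        self._check_invariant()
        return self._balance


def test_withdraw_reduces_balance():
    # TDD-style test exercising the contract-protected behaviour.
    account = Account(100.0)
    assert account.withdraw(30.0) == 70.0
```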
Use as much self-built code-generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-coded code. Why write what you can generate? However, do not use OPG (other-peoples-generators)! Code YOU generate is code you control and know.
You can expect to spend an inverting ratio over the course of your project--that is--you will write lots of hand-code and contracts in the start (1:1) of your project. As you see patterns, teach a code generator YOU WRITE to generate the code for you and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that your equation has inverted: You're writing less of your core-code, and your focus shifts to your "leaf-code" (last-mile) or specialized (vs generalized and generated) code.
Finally--get a code analyzer. A good, automated code analysis rule system and engine will save you oodles of time finding "stupid-bugs" because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules coming with it, but are learning to write our own rules for our own discovered "gotchas". Such analyzers not only save you in terms of bugs, but enhance your design--even GREEN programmers "get it" rather quickly and stop making rookie mistakes earlier and learn faster!
The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design-by-Contract, Code Analysis, and Code Generation, we have re-written a 14 year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!