I am trying to implement Trac+SVN, but I am encountering a project management issue. For background, most of my projects are web development projects (they go through phases like design, programming, testing, etc.).
I am now setting up Trac for my projects, and the problem is what to use as milestones and tickets. How granular should tickets get? For example, should a ticket say "Make X part of Y feature" or just "Make Y feature"? The more tickets I make, the more time I spend creating them.
Also, for milestones: I have seen projects like CakePHP use Trac with their milestones set as version numbers (corresponding to tags in SVN). Is that the best way?
So say I have a client whose final deadline is date X. Then I set my milestone as 1.0 with deadline X. But then how do I track the project, say, weekly? Because I don't want to realise one day before the release date that too much is left. I want some kind of weekly check.
I also want to track enhancements/bugs as tickets and club them together into milestones.
I've imagined something like 1.x.x, where the first x corresponds to a group of feature enhancements and the second x corresponds to bug fixes. Is there a better way? How do I manage weekly status in such a system?
Is there a standard way to do this? How do I go about it? I'm totally confused.
Thank you.
Well, it depends. You didn't specify how big the project is, how many programmers will work on it, or how often you plan to deliver.
That said, here's how we use Trac on one big project, spanning several years, that consists of a number of smaller subprojects.
Milestones are defined as points where we have some features in a subproject ready for delivery. The first milestone in each subproject is usually the longest. We usually name milestones like "Subproject Name v0.01". Versions are just increments: 0.01, 0.02, ... When we have implemented everything expected for a subproject, we mark the last milestone as v1.00. Subsequent bug fixes go to a milestone we mark "Subproject Name - v1.00 - bugfix".
The milestone description contains only a list of new features or bug fixes. Documentation is written in the wiki and in tickets.
The Trac wiki usually has at least one page about the new features that will be implemented in a specific milestone. It is usually a higher-level description of the expected behavior of the application. Often there are examples of expected results the application should produce.
Tickets contain a detailed description of the feature or bug to be worked on.
Bug-report tickets contain a description of the bug and (almost always) steps to reproduce.
Feature tickets contain a detailed description of the feature that must be implemented. One ticket contains up to 6 hours of work. When we plan work, we divide features so they fall in the range of 1-6 hours of work. If we estimate that a feature needs more time, we split it into several tickets so each of them fits in 1-6 hours. We picked 6 hours because we feel it is the largest estimate we can make with an error no bigger than 30% (meaning a 6-hour estimate can almost always be done in 4-8 hours). Of course, there are exceptions to these stats. In our experience, the main reason for wrong estimates is bad specifications on our part. That almost always happens because we (the developers) misunderstood the business requirements of our users.
There are a few Trac plugins for estimating and time tracking; check this page: http://trac.edgewall.org/wiki/TimeTracking . We use the Timing And Estimation Plugin. You can enter the estimated time for a ticket and the time spent working on it. Then you can get reports on how much time you spent on tickets/milestones and how much time you need to finish.
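As a rough illustration of the kind of report this enables, here is a sketch that totals estimated versus spent hours per milestone. The ticket data shape here is assumed for illustration, not the plugin's actual schema:

```python
# Rough sketch: summarize estimated vs. spent hours per milestone.
# The ticket fields (milestone, estimated, spent) are assumptions that
# mirror what a time-tracking plugin would store.
from collections import defaultdict

tickets = [
    {"milestone": "Subproject v0.01", "estimated": 6, "spent": 7},
    {"milestone": "Subproject v0.01", "estimated": 4, "spent": 3},
    {"milestone": "Subproject v0.02", "estimated": 5, "spent": 0},
]

totals = defaultdict(lambda: {"estimated": 0, "spent": 0})
for t in tickets:
    totals[t["milestone"]]["estimated"] += t["estimated"]
    totals[t["milestone"]]["spent"] += t["spent"]

for milestone, hours in totals.items():
    remaining = max(hours["estimated"] - hours["spent"], 0)
    print(f"{milestone}: {hours['spent']}h spent, ~{remaining}h remaining")
```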
After two years, we can pretty accurately estimate the time needed to do some piece of work. When we correctly understand our users' needs and requirements, we can usually deliver in the promised timeframe. Currently, our stats show that we overestimate the time needed for tickets by about 10%.
A small caveat up front: I have no idea about using Trac... or SVN. I think your milestones shouldn't be set by the version control / bug tracking system.
Typically milestones are just significant events in your project. They should be significant to all stake holders. The completion of a major deliverable is a milestone. The completion of a few features isn't. Sign off on all plans and contracts is a significant event, but the completion of 10 mockups isn't.
I tend to use the schedule and tasks for working with the team, ticking tasks off as they are done. To everyone else I just report on milestones. Are we going to make UAT by May 15th? Yes we are.
Since milestones are tools for reporting to sponsors and other stakeholders you should set them to be what they think is important. My sponsors will want to know when a certain core set of features is completed, so that is a milestone. They'll want to know when UAT is signed off so that is a milestone.
Set too few milestones and no one will know how you are progressing until the end. Set too many and the value will be lost.
There is no magic formula, but projects with hundreds of tasks and thousands of man hours may only have 4 milestones.
(Image: a sketch of project milestones on a timeline: http://officeadd.in/Images/articles/ProjectMilestones-scribblea.png)
Sorry this doesn't relate to Trac and SVN directly, but hopefully this gives you a rough idea on how milestones are generally used. Oh and apologies in advance for the overuse of Comic Sans... yuck.
Setting your 1.0 milestone to the deliverable date is fine, but you'll want to define earlier milestones - make them weekly if that's a good interval for you, and number them appropriately. For a 4 week project, maybe 0.2, 0.5, 0.7, and 1.0 would work. List relevant bits on each milestone: 'Design complete', 'Coding complete', 'Testing complete', etc. If you're not on target, then the real project management work starts!
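One crude weekly check along those lines is to compare the fraction of tickets closed against the fraction of the schedule elapsed. A minimal sketch, with invented dates and ticket counts:

```python
# Hypothetical weekly check: compare fraction of tickets closed against
# fraction of schedule elapsed for a milestone. All numbers are
# illustrative, not read from a real Trac instance.
from datetime import date

milestone_start = date(2009, 3, 1)
milestone_due = date(2009, 3, 29)   # the "1.0" deadline
today = date(2009, 3, 15)

closed, total = 9, 24               # counts you would read off the milestone page

time_elapsed = (today - milestone_start).days / (milestone_due - milestone_start).days
work_done = closed / total

print(f"Schedule elapsed: {time_elapsed:.0%}, tickets closed: {work_done:.0%}")
if work_done < time_elapsed:
    print("Behind schedule: re-plan now, not the day before release.")
```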
I see you have several options and a couple of decisions to make.
You could think about Feature Driven Development, and use Trac to support communication rather than control: coarse-grained tasks, fine-grained tickets, and early releases.
Make a list of the features to be developed and state that the release, say version 1.0, happens when all the features are developed and tested. Make umbrella tickets for all features. These are coarse-grained and will define the development rhythm.
Now define a couple of milestones based on the number of features planned and the time available. The first milestone should contain at least one feature, since the goal of a milestone is to get the project built for testing and feedback. Define one or more milestones to mark when all features are completed; call them "beta", "release candidate" or whatever.
If during development there is a need for finer-grained tasks, don't be shy about making them, and make the umbrella tickets dependent on these newer tickets.
A bug report doesn't need to sit under any of those, and can have as much detail as needed. Bug reports are fine-grained and will not define the development rhythm. One exception is a bug-squashing sprint to eliminate showstoppers. Publish the names of the developers with the most assigned-but-unsolved bugs to compel them to solve the issues.
Part of the process of making a milestone, a beta or a release candidate is tagging the source, to make the process repeatable and to be able to spot bugs even after the trunk source has changed. In SVN, the usual way to tag is to copy the trunk source to a directory under "tags" and make sure nobody commits into that branch.
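For example, the tag is a single server-side copy. A sketch wrapped in Python for scripting releases; the repository URL and version number are placeholders:

```python
# Sketch of tagging a release in SVN. 'svn copy' between two repository
# URLs is cheap because SVN creates a shallow copy server-side.
import subprocess

REPO = "https://svn.example.com/myproject"  # hypothetical repository URL
version = "1.0"

subprocess.run(
    ["svn", "copy",
     f"{REPO}/trunk",
     f"{REPO}/tags/{version}",
     "-m", f"Tagging release {version}"],
    check=True,
)
```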
I believe a two-number version number is enough for most cases: the first number denotes compatibility and the second the release. But there are several variables that can go into a version number: source compatibility, binary compatibility, bugfix level, release, companion product version (a la Oracle), protocol compatibility, etc.
I've been using Trac/SVN for two and a half years now.
Here is what I suggest:
Split production of a software version into several iterations: Inception, Elaboration, Transition (or call them whatever you want).
Plan features for the very first iteration. For the others, plan enhancements and bugfixes.
Tasks (tickets) should be as granular as possible provided each ticket has a client-valuable deliverable
Saving time on ticket creation is not a good idea. Smaller, more granular tasks mean more control over progress, and thus earlier discovery of planning shortcomings and more time to manage your way out.
Tickets can be split even while in progress. If a developer has reached a result that can be shown to the customer but has not completed the whole task, they can split the task and mark the completed part as "closed" or "resolved", which gives more granular control.
Track the progress daily, not weekly (or at least several times a week)
Trac is a very nice tool. Its best feature is the ability to put WikiLinks everywhere, including changeset comments. If you require putting the ticket # in the changeset comment, and then the changeset number in a ticket comment, this links the tasks and changes to the code. Later, these links make it easier to track the evolution of the software. It is a life saver, especially if the project goes beyond a couple of months in duration.
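If you want to enforce that convention, a pre-commit hook can reject commits whose message lacks a ticket reference. A minimal sketch, assuming a standard SVN hook setup and the usual "#123" Trac ticket syntax:

```python
#!/usr/bin/env python
# Sketch of an SVN pre-commit hook that rejects commits whose log message
# lacks a Trac ticket reference (e.g. "#123"). svnlook ships with SVN;
# wiring this file into hooks/pre-commit is per-repository setup.
import re
import subprocess
import sys

repo, txn = sys.argv[1], sys.argv[2]
log = subprocess.run(
    ["svnlook", "log", "-t", txn, repo],
    capture_output=True, text=True, check=True,
).stdout

if not re.search(r"#\d+", log):
    sys.stderr.write("Commit message must reference a ticket, e.g. 'Fix foo (#123)'\n")
    sys.exit(1)
sys.exit(0)
```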
I have two projects, with identical priorities and work hours demand, and a single developer. Two possible approaches:
Deliver one project first.
Split the developer's time and deliver both later.
I can't see any reason why people would choose the second approach, but they do. Can you explain to me why?
It seems to me that this decision often comes down to office politics. One business group doesn't want to feel any less important than another, especially with identical priorities set at the top. Regardless of how many different ways you explain why doing both at the same time is a bad idea, the politics seem to get in the way.
To get the best product to the users, you need to prevent developer thrashing. When the developers are thrashing, the risk of defects and length of delivery times begin to increase exponentially.
Also, if you can put your business hat on, you can try to explain to them that right now, nobody is getting any value from what the completed products will deliver. It makes more sense for the business to get the best ROI product out the door first to begin recouping the investment ASAP, while the other project will start as soon as the first is finished.
Sometimes you need to just step away from the code you have been writing for 11 hours in order to stay maximally productive. After you have been staring at the minutiae of a system you have been implementing for a long time it can become difficult to see the forest for the trees, and that is when you start to make mistakes that are hard to un-make.
I think it is best to have 2-3 current projects; one main one and 1-2 other projects that aren't on such a strict timeline.
If both projects have the same priority for the company, one obvious reason is for project managers to give higher management the illusion that both of the projects are taken care of.
Consider that the two projects could belong to different customers (or be requested by different people from higher management).
No customer wants to be told to wait while a different customer's project is given priority.
"We'll leave the other one for later" is, a lot of times, not an acceptable answer, even though this leads to delays for both projects.
I believe this is related to the notion of "Perceived Responsiveness" in a software program. Even if something takes more time to do, it looks faster when it appears to be doing something, instead of idly waiting for some other stuff to complete.
It depends on the dependencies involved. If you have another dependency on the project that can be fulfilled when the project is not 100% complete, then it may make sense to split the developer's time. For example, if your task is large, it may make sense to have the primary developer do a design, then move on to a second task while a team member reviews the design the primary developer came up with.
Furthermore, deserializing developers from a single task can help to alleviate boredom. Yes, there is potentially significant loss in the context switch, but if it helps keep the dev sharp, it's worth it.
If you go by what's in the great and holy book 'Peopleware', you should keep your programmer on one project at a time.
The main reason for this is that divided attention will reduce productivity.
Unfortunately, because so many operational managers are good businessmen rather than good managers, they may think that multitasking or working on both projects somehow means more things are getting done (which is impossible; a person can only physically exist in one stream of the space-time continuum at one time).
hope that helps :)
LM
I think the number 1 reason from a management standpoint is for perceived progress. If you work on more than one project at the same time stakeholders are able to see progress immediately. If you hold one project off then the stakeholders of that project may not like that nothing is being worked on.
Working on more than one project also minimizes risk somewhat. For example, if you work on one project first and that project takes longer than expected, you could run into issues with the second project. Stakeholders also most likely want their project done now. Holding one off because of another project can make them reconsider going ahead with it at all.
Depending on what the projects are you might be able to leverage work done in one for the other. If they are similar then doing both at the same time could be of benefit. If you do them in sequence only the subsequent projects can benefit from the previous ones.
Most often projects are not a constant stream of work. Sometimes developers are busy and sometimes not. If you only work on 1 project at a time a developer and other team members would likely be doing nothing while the more 'administrative' tasks are taking place. Managing the time over more than one project allows teams to get more done in a shorter timeframe.
As a developer I prefer working on multiple projects as long as the timelines are reasonable. As long as I'm not being asked to do both at the same time with no change in the schedule I am fine. Often if I'm stuck on one project I can work on the other. It depends on the projects though.
I'd personally prefer the former but management might want to see progress in both projects. You might also recognise inaccurate estimates earlier if you are doing some work on both, enabling you to inform the customer earlier.
So from a development perspective 1 is the best option but from a customer service point of view 2 is probably better.
It's about managing your clients' expectations. If you can tell both clients you are working on their projects, each taking a little longer because of the other, that beats saying "we are putting your project off until we finish this other project", in which case the client is going to jump ship and find someone who can start on their project now.
It's a placebo effect: splitting a developer between two projects in the manner you've described gives people/"the business" the impression that work is being completed on both projects (at the same rate/cost/time), whilst in reality it's probably a lot more inefficient, since context switching and other considerations carry a cost (in time and effort).
On one hand, it can get the ball rolling on things like requirement clarifications and similar tasks (so the developer can switch to the alternate project when they are blocked) and it can also lead to early input from other business units/stakeholders etc.
Ultimately though, if you have one resource then you have a natural bottleneck.
The best thing you can do for that lone developer is to intercept people (to keep them from distracting that person), and try to carry some of the burden around requirements, chasing clarifications and handling user feedback, etc.
The only time I'd ever purposely pull a developer off their main project is if they would be an asset to the second project, and the second project was stalled for some reason. If allowing a developer to split a portion of their time could help jump-start a stalled project, I'd do that. This has happened to me with "expert" developers - the ones who have a lot more experience/specialized skills/etc.
That being said, I would try to keep the developer on two projects for as little time as possible, and bring them back to their "main" project. I prefer to allow people to focus on one task at a time. I feel that my job as a manager is to balance and shift people's priorities and focus - and developers should just develop as much as possible.
There are three real-life advantages of splitting developers' time between projects that cannot be ignored:
Specialisation: doing or consulting on work that requires similar specialised knowledge in both projects.
Consistency and knowledge sharing: bringing consistency into the way two separate products are built and work, spreading knowledge across the company.
Better team utilisation: on a rare occasion when one of the projects is temporarily on hold waiting for some further input.
Splitting time between several projects is beneficial when it does not involve a significant change in context.
Having a developer work single-handedly on multiple software development projects negates the benefits of specialisation (there isn't any in that case), consistency and knowledge sharing.
That leaves just the advantage of time utilisation; however, if contexts differ significantly and there is no considerable overlap between projects, the overhead of switching will very likely exceed any time saved.
Context switching is a very interesting beast: contrary to its name, which implies a discrete change, the process is always gradual. There are various degrees of having context information in one's head: 10% context (shallow), 90% (deep). It takes less time to shallow-switch than to fully switch; however, there is a direct correlation between the amount of context loaded (concentration on the task) and output quality.
It’s possible to fill your time entirely working on multiple distinct projects relying on shallow-switching (to reduce the lead time), but the output quality will inevitably suffer. At some point it’s not only “non-functional” aspects of quality (system security, usability, performance) that will degrade, but also functional (system failing to accomplish its job, functional failures).
By splitting the time between two projects, you can reduce the risk of delaying one project because of another.
Let's assume the estimate for both projects is 3 months each. By doing it serially, one after the other, you should be able to deliver the first project after 3 months, the second project 3 months later (i.e. after 6 months). But, as things go in software development, chances are that the first project encounters some problems so it takes 12 months instead. Or, even worse, goes into the "in use, but never quite finished" purgatory. The second project starts late or even never!
By splitting resources, you avoid this problem. If everything goes well with the second project, you are able to deliver it after 6 months, no matter how well the first project does.
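As a toy model of this trade-off (deliberately ignoring the context-switch overhead that other answers rightly stress), assuming one developer split 50/50:

```python
# Toy model of serial vs. split delivery for two projects, in months of
# effort. Numbers are illustrative only; context-switch cost is ignored.
def serial(first, second):
    # The second project waits for the first to finish.
    return first, first + second

def split_50_50(first, second):
    # Each project gets half the developer's time, so each takes twice
    # its own effort in calendar time, independent of the other's overrun.
    return 2 * first, 2 * second

print(serial(3, 3))        # (3, 6)
print(serial(12, 3))       # (12, 15): the overrun delays both
print(split_50_50(3, 3))   # (6, 6)
print(split_50_50(12, 3))  # (24, 6): the second project is unaffected
```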
The real life situations where working on multiple projects can be an advantage is in the case where the spec is unclear (every time) and the customer is often unavailable for clarification. In those cases you can just switch to the other project.
This will cause some task switching and should be avoided in a perfect world, but then again...
This is basically my professional life in a nutshell :-)
Suppose you're the product manager for an internal enterprise web application that has 2000 users and 7 developers. You have a list of 350 future features, each ranging from 5 to 150 developer days of work.
How do you choose what features to work on, and how do you run the release process?
Here's what I'm thinking: (skip if boring)
Release Process. Work on several features at once, release each individually when it's ready. The other option (what we've been doing up to this point) is to pick out a certain set of features, designate them as "a release", and release them all at once (announcing via mass email).
The advantage of a shorter release process is that we can release features as soon as we finish development. The advantage of a bigger process is that it's easier to organize.
Feature Prioritization. Put all the future features in a spreadsheet with columns for feature, description, comments, estimate, benefit, (your) estimate, (your) benefit. Give copies to 2 senior engineers, the other senior project manager and yourself.
The engineers estimate all the features (how precisely? consulting each other?). To determine benefit, everyone allocates points (total = 10 * [number of future features]) among the future features (without consulting each other?), then we compare scores and average them (?).
Another potential strategy here is to just rank each feature on an absolute (say) 1-100 scale. Having an absolute ranking is nice because it makes prioritizing as our feature list changes easier (we don't want to have to redistribute points every time someone proposes a new feature).
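For illustration, here is one way the scoring could work, as a sketch in Python; the features, estimates and point allocations are all made up:

```python
# Sketch of the scoring scheme above: each rater spreads points across
# features, scores are averaged, and features are ranked by average
# benefit per estimated developer day ("bang per buck").
estimates = {"search": 20, "export": 5, "sso": 40}  # developer days

ratings = {
    "engineer_1": {"search": 12, "export": 8, "sso": 10},
    "engineer_2": {"search": 15, "export": 5, "sso": 10},
    "pm":         {"search": 10, "export": 10, "sso": 10},
    "you":        {"search": 14, "export": 6, "sso": 10},
}

avg_benefit = {
    f: sum(r[f] for r in ratings.values()) / len(ratings) for f in estimates
}

for f in sorted(estimates, key=lambda f: avg_benefit[f] / estimates[f], reverse=True):
    print(f"{f}: benefit {avg_benefit[f]:.1f} over {estimates[f]}d -> "
          f"{avg_benefit[f] / estimates[f]:.2f} per day")
```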
What's your strategy? Do any books / websites attack the problem at this level of detail?
There's a great book that helps cover this topic called Agile Estimating and Planning by Mike Cohn. It has some great ways to estimate and plan releases, including a planning game called planning poker, where the engineering team gets together with cards to estimate user stories. Each engineer plays a card (1, 2, 3, 5, 8, 13) face down. The engineers with the high and low cards explain their reasoning, and you do it again. After one or two repeats there is generally convergence on the same estimate.
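If you want to see the mechanics, here's a toy sketch of one poker round; the names and votes are invented:

```python
# Toy version of a planning-poker round: reveal the cards and, if the
# estimates haven't converged, ask the high and low voters to explain
# before re-voting.
def review_round(votes):
    """votes: dict mapping engineer name -> card played (1, 2, 3, 5, 8, 13)."""
    low = min(votes, key=votes.get)
    high = max(votes, key=votes.get)
    if votes[low] == votes[high]:
        return f"Converged on {votes[low]} points."
    return f"{high} ({votes[high]}) and {low} ({votes[low]}) explain, then re-vote."

print(review_round({"ann": 3, "bob": 3, "cathy": 8}))  # cathy and ann explain
print(review_round({"ann": 5, "bob": 5, "cathy": 5}))  # converged on 5
```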
There's also Beyond Software Architecture: Creating and Sustaining Winning Solutions by Luke Hohmann, which might help with some of the product-management-related pieces and the reasoning to use for prioritization. I have not yet read the book, but I went to a talk by Luke Hohmann where he covered its subjects, and I can't wait to read it.
Also I would recommend reading books on various Agile Development processes such as Scrum, Crystal Clear, and XP. There's Agile Project Management with Scrum by Ken Schwaber and Crystal Clear: A Human-Powered Methodology for Small Teams by Alistair Cockburn. Also Extreme Programming Explained: Embrace Change (2nd Edition) by Kent Beck and Cynthia Andres.
As for feature prioritization, that is generally done by the stakeholders. You need to work on the features that address the needs of your stakeholders, which, as Luke Hohmann points out, includes the system architecture.
However, one of the most important things is to make sure that the team agrees on the software development process. If you force a process the team doesn't believe in, it will not work.
Surely you don't have 350 independent features, some must depend on others.
Put them all into some task management software which allows you to define which tasks depend on which other ones, and you might soon find that you've got a much easier decision process...
As for the release process, you could introduce the features when they are ready and inform the users via a company blog that is updated whenever a new feature is done. Such a blog entry should then give a short overview about the feature, where to find it, how to use it, etc.
Not only does this keep your users curious and coming back, it also offers a great way for potential customers to check out the progress of your offering.
As for prioritizing future implementation: how about involving the customers there as well? Look at uservoice (it is used to track requests/bugs for this site). It offers a nice way of letting the users vote on most desired things as well as showing what is being worked on and what is planned.
"rank each feature on an absolute (say) 1-100 scale"
Build them in order.
Release them when you've got (a) significant value or (b) critical mass of small things.
Always work in priority order. Build the most important stuff first. Deliver as much value as quickly as possible.
A few people here have already said it: involve the end users in the decision process of what goes in and what waits. After all, it's not about what's useful to you, but what's useful to your end user.
That said, I wouldn't leave it open to 'all users to decide'; there should be a representative from the user group who you work with (i.e. a senior user role).
Even then, you aren't saying "what features do you want?" to the user; you ask them what functionality they would like to see arrive next. The reason you put it to them that way, rather than letting them pick off a massive spreadsheet of individual features, is two-fold: 1) they don't know about dependencies, 2) you want to gather together a pack of features for a logical release.
So the user representative may say "we need to have the photo gallery working next". They might not be aware that the photo gallery is practically the same as the file upload module (it just accepts different file types).
So, in the next release version, you pack together the photo gallery and the file upload. Why wouldn't you, considering that the file upload is like 75% done because of the work that went into the photo gallery module?
I don't believe you necessarily have to work on the hardest features first; it's what the users need sooner, plus what other features you gather together to make a 'logical pack'.
To a certain extent, you want to clear the feature log too. So for example, you could have the following features and estimated times:
Registration Form - 3 hrs
Photo Gallery - 8 hrs (<- client has said they want this next)
File Upload - 2 hrs
Voting/Poll module - 7 hrs
Stock Ticker - 5 hrs
Out of these contrived features, I would take no. 2 (because the client is asking for it), then I would take nos. 1 and 3: no. 3 because it's practically done once the gallery code is done, and no. 1 purely because it's the smallest estimate of the remaining features. Nothing will give you or your coding crew the feeling of progress on your project like seriously beating down the feature list (it will probably refill, though).
As far as letting people know about a new release and what's in it, I would do it via email (rather than by blog or within the program itself), and I would make it as brief as possible, bullet points, something like this:
===
Version 1.1 of Blue Widgets has just been launched and is available for your use now.
The following has been added:
Photo Gallery
File Upload
Registration Form
The user manual within the system contains more information on how these features work.
===
Bang, done. Make it as easy for people as possible.
LM
Let's say you work 100 days on a project. How many days would each phase of your process (requirements analysis, specification, etc.) take?
I'm interested also in the ratio of specific activities in every phase, such as writing tests, back-end coding, front-end coding, visual design, database design etc.
Many thanks!
EDIT:
Just to make things clear, I'm not talking about web site design - I'm interested in more "serious" web development, such as custom business web applications. I know, everything depends on the specifics of each project, however I suppose the ratios could be roughly the same from project to project.
EDIT2:
As Helen correctly remarked, this question is really hard to answer, since projects can be so different, and so can teams. To make it more specific, let's say you have a team of four developers: two for back-end work, one for front-end programming, and one for design and HTML/CSS coding (one member of the team acts as project manager), and you are supposed to develop the StackOverflow.com site.
We're running agile scrum projects, so we typically run all these activities in parallel. So while I cannot answer your exact question, I can give you some ideas of the ratios we have found to be effective:
4-5 developers can be served by one client side programmer (html/css), one on-team tester and one interaction designer (works with the customer to design wireframes). A team like this typically needs a 50% graphic designer for most applications, but your mileage may vary there. Then there's project manager, and there's all sorts of other stakeholders that are not part of the core development team.
In the development team you normally have a couple of developers who are sharp on client side development and similar amount on the back-end. These staffings also tend to reflect resource usage ;) Testing is an integral part of development as well as the efforts of the on-team tester.
Your local conditions may of course vary, but these numbers are just to give you some idea.
Step 1: denial
Step 2: anger
Step 3: acceptance
The time each step takes is different for all team members involved.
I agree with everyone who started along the lines of, "It depends on the project".
On the other hand, I do think that there's a consistent process that can be followed; only tweaking the percentages of effort to match the project:
Typically, I follow these basic principles:
Discovery - determine the feature/functionality of the system. The easiest (and worst) thing to do is accept what's being asked for and go with it.
For example, "building stackoverflow.com" is a pretty broad request - and is actually the wrong request. The project has to start with "I need an online location where programmers can collaborate".
Based on that one thing you're trying to solve, you can drill down into all the details you want - like how a question will be answered, asked, rated, etc.
I think this is the most crucial step! output = requirements/specification; 20/100 days can safely be spent here
Wireframing - this is where I like to use basic HTML pages, paint.NET, or even construction paper and glue to mock up every aspect of the final site's functionality. I like using paper because it's easy to make changes :)
Going through this process forces you to consider just about every aspect of the user experience and gives you the flexibility to add/remove features and adjust your requirements as needed. Your customer has some input to changes before you've committed a bunch of time to writing code.
An added bonus is that you get to use paste :)
10/100 days
Implementation/Testing - I group implementation AND testing together because I think it's short-sighted to develop a whole site without testing along the way. (At the same time, you still need step 4.) This is the part where the rubber hits the road. If you've handled your client properly in steps 1 and 2, you'll be pleasantly writing your code without any last-minute changes in scope (or at least very few). I try to follow a general set of steps for implementation:
data development (db design, query design, sample data setup)
site framework (set up your environment(s); production, dev, and qa)
front-end structure (css, standard classes, standard html structures)
start coding!
55/100 days
SQA - hopefully you can get some non-involved parties/end users to test the app as you go. Test plans need to be developed to make clear what should be tested and the desired outcomes. I like using real people for testing the front end; automated tools are fine for code/backend modules.
This is a good time to let the client see things progressing - they should have very limited ability to make changes at this point.
10/100 days
Delivery/Post Production honeymoon - you've built it, tested it, and you're ready to deploy. Get the code out there and let the client play. You shouldn't have much to tweak; but I'm sure there will be some adjustments.
5/100 days
Some of this seems idealistic; but you'd be surprised how quickly you can ship your application when you've got a well-reviewed, well-created specification.
It is impossible to give a meaningful answer to this question. The ratios will not be even roughly the same from project to project. For some projects the visual design barely matters (as long as it more or less works) but the database is critical and complex. For others, it's all about providing a smooth user experience with lots of AJAX goodies and other eye candy, but the underlying data is trivially simple to organise and store.
It sounds like you're thinking mainly of one-man projects, but for larger teams the size and setup of the team also matters, as well as your development process.
Probably we are an unusual development shop. Our whole existence (at least during work hours) is requirements gathering. Developers are required to also work in every other department, be it answering the phone in after-sales support (and fighting the CRM software), driving a forklift in the warehouse (and fighting the mobile terminals) or packing crates in the shipping station (and fighting confusing delivery notes).
When we tackle a new project, "requirements gathering" is usually an afternoon on the whiteboard, usually with somebody from the department that will use the new software most. There is little upfront design and lots of refactoring and rewriting. We have been very happy with this and generated about 100,000 lines of code that are well-architected and stable.
But it seems we are hitting a complexity barrier now. This is very frustrating because moving to "heavier" processes than hack and slay coding results in a dramatic loss of productivity.
Just to be clear, you're basically time-boxing your work, which is directly related to having a fixed budget (4 developers x $x per day x 100 days, assuming it's a 100-day duration and not 100 days of work effort). If that's the case, then on average you would spend:
25% up-front planning, which includes scope, spec development, technology approach, logistics (computers, servers, work space), resource gathering
50% development: test case (TDD) development, schema design and implementation, front-end coding, back-end coding, deployment
15% testing: basic break/fix activities
10% overhead/management: project management, communication and coordination
This is a very rough estimate; there are many areas to consider, including resource skills/maturity, the technology being used, location of resources (one room or across the country), level of requirements, etc. The use of skill-specific resources would make planning more difficult, since you might need resources to perform multiple roles. One suggestion would be to get three generalists who can help spec/design/plan, and one tech wizard who would ensure the platform and database are set up correctly (key to success once you have requirements that are as good as possible).
That is truly a tricky question. To give a somewhat exact estimate of the ratio of time you need to apply to each step (if we take a classical approach of design, implement, test and deploy), one needs to know the specification and the expertise of the project members.
If you take McConnell's book "Software Estimation" (which I highly recommend), there is a chapter in there about historical data and how to use it for future projects.
I do not think you have exact historical data from former projects. Well, I don't have any either, although I always remind myself to record it ;)
Since the smallest failures or uncertainties in the design phase are the most crucial ones, take a lot of time to specify what you want to do. Make sure that everyone understands it the same way, and write it down.
To cut a long story short - I'd put 50% - 75% of the time in the design (if 75% this would include a prototype to clear all uncertainties) and equal parts in implementation and test.
If you are using TDD you would mix design and test a bit so you would take a bit of the design-phase and add it to the test-phase.
Building a list of client needs 1-2 days
This depends on the client and what they need and how well prepared they are.
Designers do initial sketch ups 2-3 days
A bit of branching happens here as 2 and 3 will happen concurrently.
Programmers build any functionality not already in our existing system: 1 day - 1 month
This depends on the client, and what they need, more than almost anything else.
This also will only produce functional code.
Repeat steps 2&3 until the client is happy with the general feeling of what we have.
Could be 1 iteration, could be 100 (not likely; if by 10 we couldn't make them happy, we'd send them somewhere else).
Build final design 1-5 days
This is the final version: no errors, valid CSS/HTML/JS, everything cross-browser, etc.
Build final functionality 2-3 days
This code is "perfect": it works 100%, it is pretty, there are no known bugs, and the developers are happy to ship it.
This and Step 5 happen concurrently.
Deploy 10 seconds.
Then 2 weeks, 2 months and 6 months later we do a review to make sure there have been no problems.
So if you skip the review, this usually takes 8-20 days; I don't know how you'll work that into 100 days.
If we are just building an application (or extending one) for a client, we spend 2-3 days defining EXACTLY what they need, then however long it takes to build it.
As the title suggests...
How can I apply a scrum process to anything that isn't working on new code and can't be estimated to some degree?
How can I apply a scrum process to a maintenance and emergency fixes environment (where fixes can take from 5 minutes to 2 weeks) when I would still like to plan things?
Basically, how do I handle unplanned tasks and tasks that are very difficult to estimate with the scrum process? Or am I simply applying the wrong process for this environment?
Basically, how do I handle unplanned tasks and tasks that are very difficult to estimate with the scrum process? Or am I simply applying the wrong process for this environment?
You're using the wrong process for this environment. What you need is a stack/queue management process which is separate to your planned/estimated SCRUM development process.
The reason for this is simple and twofold:
1. As you mention in your post, it is often very difficult to estimate maintenance tasks, especially where legacy systems are involved. Maintenance tasks in general, and legacy systems specifically, have a tendency to involve 'curly' problems, or have a long 'tail', where one seemingly simple fix requires a slightly more difficult change to another component, which in turn requires an overhaul of the operation of some subsystem, which in turn... you get the point.
2. Quite often when dealing with maintenance tasks, by the time you have finished estimating, you have also finished solving the problem. This makes the process of estimation redundant as a planning tool. Those who insist on dividing estimation from solving the problem for maintenance tasks are simply adding unnecessary overhead.
Put simply, you need a queueing system. It will have these components:
A 'pool' of tasks which have been identified as requiring attention. Newly raised items should always go into the pool, never the queue.
A process for moving these tasks out of the pool and onto the queue. Usually a combination of business and technical knowledge is required.
A queue of tasks which are clearly ordered, such that developers responsible for servicing the queue can simply pick from the front of it.
A method for moving items around in the queue (re-prioritising), to allow 'jumping the queue' for critical/emergency items.
A method for delivering the completed items which does not interrupt servicing the queue. This is important because the overhead of delivering maintenance items is usually significantly lower than for development work. You don't want your maintenance team sitting around for a day waiting for the build and test teams to give them the OK each time they deliver a bugfix.
There are other nuances to queue management, but getting these in place should set you on the right path.
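If it helps to see the shape of it, here is a minimal sketch of such a pool/queue structure. In practice this lives in a tracker rather than in code, and the priorities and task names below are invented:

```python
# Minimal sketch of the pool/queue structure described above.
import heapq
import itertools

class MaintenanceQueue:
    def __init__(self):
        self.pool = []                  # newly raised items land here first
        self._queue = []                # heap of (priority, seq, task)
        self._seq = itertools.count()   # tie-breaker keeps FIFO order

    def raise_item(self, task):
        self.pool.append(task)

    def promote(self, task, priority):
        # Business/technical triage moves items from the pool to the queue.
        self.pool.remove(task)
        heapq.heappush(self._queue, (priority, next(self._seq), task))

    def reprioritise(self, task, priority):
        # Emergencies jump the queue by getting a lower priority number.
        self._queue = [e for e in self._queue if e[2] != task]
        heapq.heapify(self._queue)
        heapq.heappush(self._queue, (priority, next(self._seq), task))

    def next_task(self):
        # Developers just pick from the front.
        return heapq.heappop(self._queue)[2] if self._queue else None

q = MaintenanceQueue()
q.raise_item("fix report totals")
q.promote("fix report totals", priority=5)
q.raise_item("login broken")
q.promote("login broken", priority=1)   # emergency jumps the queue
print(q.next_task())                    # -> "login broken"
```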
If you have that much churn in your environment, then your key is going to be shorter iterations. I've heard of teams doing daily iterations. You can also move towards a Kanban type style where you have a queue which has a fixed limit (usually very low, like 2 or 3 items) and no more items can be added until those are done.
What I'd do is try out one week iterations with the daily stand-ups, backlog prioritization, and "done, done". Then reevaluate after 5 or 6 weeks to see what could be improved. Don't be afraid to try the process as is - and don't be afraid to tweak it to your environment once you've tried it.
There was also a PDF called Agile for Support and Operations in 5 minutes that was recently posted to the Scrum Development list on Yahoo!
No one said that backlog items have to be new code. Maintenance work, whether bug fixes, enhancements or data fixes can be put into the Product Backlog, estimated and prioritized. This is actually one of the biggest benefits of using Scrum - no more arguments with users about whether something is a bug fix or an enhancement.
With Waterfall, there's a tacit understanding that bugs are the responsibility of the developers. Somehow, they are on the hook to fix them without impacting the development of new code and features. So they are "free" to the users, but a massive inconvenience to the developers.
In Scrum, you recognize that all work takes time. There is no "free". So the developers freely accept that something is a bug but it still goes into the Product Backlog. Then it's up to the Customer to decide if fixing the bug is more important than adding new features. There are some bugs that you can live with.
As the title suggests... How can I apply a scrum process to anything that isn't working on new code and can't be estimated to some degree?
On the contrary, I've heard teams find adopting Scrum easier in the maintenance phase, because the changes are smaller (no grand design changes) and hence easier to estimate. Any new change request is added to the product backlog, estimated by devs and then prioritized by the product owner.
How can I apply a scrum process to a maintenance and emergency fixes environment (where fixes can take from 5 minutes to 2 weeks) when I would still like to plan things?
If you're hinting at fire-fighting type activity, keep a portion of the iteration's work quota for such activities. Based on historical trends/activity, you should be able to say, e.g., we have a velocity of 10 story points per iteration (4-person team, 5-day iteration). Each of us spends about a day a week responding to emergencies, so to be realistic we should only pick 8 points' worth of backlog items for the next iteration. If we don't have emergency issues, we'll pick up the next top item from the prioritized backlog.
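The arithmetic behind that reservation can be made explicit; a small sketch with invented numbers:

```python
# Sketch of the reservation arithmetic above: reserve a slice of velocity
# for firefighting, sized by how much unplanned work past sprints absorbed.
velocity = 10                   # story points per iteration
past_unplanned = [2, 1, 3, 2]   # points lost to emergencies, last 4 sprints

reserve = sum(past_unplanned) / len(past_unplanned)
commit = velocity - reserve
print(f"Reserve ~{reserve:.0f} points for emergencies; commit to {commit:.0f}.")
```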
CoryFoy mentions a more dynamic/real-time approach with kanban post-its in his response.
Basically, how do I handle unplanned tasks and tasks that are very difficult to estimate with the scrum process? Or am I simply applying the wrong process for this environment?
AFAIR Scrum doesn't mandate an estimation technique. Use one that the team is most comfortable with: man-days, story points, etc. The only way to get better at estimation is practice and experience, I believe. The more the same set of people sit together to estimate new tasks, the better their estimates get. In a maintenance kind of environment, I would assume that it's easier to estimate, because the system is more or less well known to the group. If not, schedule/use spikes to get more clarity.
I sense that you're attempting to eat an elephant here. I'd suggest the following bites:
Working Effectively with Legacy Code
Agile Estimating and Planning
Agile Project Management with Scrum
Treat all fixes and improvements as individual stories and estimate accordingly. My personal view is that things that take less than 10-15 minutes to fix should just be done straight away. Those that take longer become part of the current 'fix & improvement' iteration cycle. As with estimating regular requirements, you take a best guess as a team. As more information comes to light and the estimates turn out to be off, adjust the iteration and upcoming sprints.
It's hard to apply an iteration cycle to fixes and improvements because, more often than not, they prevent the system from working as it should, and the 'business' puts pressure on for them to go live ASAP. At this point it may work out better to move to a really short iteration cycle, like one or two weeks.
You ask about how to use a process for emergencies. Try to reserve the word "emergency" for things that require hacking the code in the production environment with the users connected at the same time. Otherwise, stakeholders are likely to abuse the word and call an emergency on anything they would like to have really fast. Lack of process does not mean out of control: somebody must be accountable for declaring the emergency, and somebody (best practice is somebody else) must authorize the changes outside the normal process and then take responsibility for it.
Other than that, the suggestion of using each iteration to complete a number of fixes and improvements is probably the best way to go.
This depends a lot on the application life cycle. If this is a 'sunset' application, to be retired soon, the focus would of course only be on fixing the top-priority bugs.
If the product is 'mature', has a roadmap, and is continuing to evolve, you would have fixes and enhancements to take care of. There's a lot of impetus to keep the design clean and evolve by refactoring. This may mean periodic minor and major releases (apart from eFixes: emergency fixes/hotfixes). You can practice agile to your heart's delight, as enhancements and fixes can be storyboarded and made part of your sprint backlog. The entire list would make up your product backlog.
Bottom line: if you want to refactor and keep your application design clean (programmers tend to take shortcuts if the focus is exclusively bug fixing), it can only happen with a 'living' application, one that is evolved and updated. And agile is a natural fit.
Even if you have only fixes (that is, a 'Turing complete' ;) or sunset application), it helps if they can all be rolled into a sprint and rolled into production at the end of each sprint. If the fixes need to be rolled into production as and when they're fixed, it's much more difficult to apply Scrum.
We have applied scrum in this context.
Some of the keys of success.
1. Everyone in the enterprise buys into the scrum idea (this is crucial for success).
2. Sprints of about 2 weeks (our first 2-3 sprints were 1 week long, to understand the process).
3. Under no circumstances can a point be added to the current sprint.
4. If a real emergency arises, stop the sprint, do a retrospective and start a new sprint.
5. Take time for retrospection (time to share thoughts about the last sprint and to analyze it).
6. In each sprint, insert at least one task to improve the process (often added to the backlog in the retrospective); it's good for the troops' morale, and at the end of the day you will be on your way to having fewer emergencies.
7. TIME-BOXED daily stand-up meetings.
For estimation: usually, the more you estimate, the more precise you become. What is good about Scrum is that each programmer picks his own task and can set a new estimate if he thinks the current one is not realistic. And if you still have issues with estimation, let your team find a solution... you may be surprised by what they come up with.
For the 2-week fix: if that's the original estimate, cut it into smaller pieces. If you made a more optimistic estimate (say 2-3 days), the issue should be raised as a blocker in the stand-up meeting. Maybe somebody else has ideas about how to fix it. You can decide to do some pair programming to find a solution; sometimes just describing the bug to another programmer helps a lot with debugging. You can also delay it until after other tasks in the sprint. The idea is to deliver fully functional tasks. If you don't have time to fix it in full and demonstrate it, even if you're at 90% done (yeah! we know what that means), you consider it not done in that sprint. In the next sprint you will be able to address it with a correct time estimate.
Finally, from what I understood, Scrum is more about having "tools" to improve your process. You start with something simple. You do small iterations. In each iteration you have a FIXED TARGET to complete, instead of an infinite bug list. Because you pick your tasks from the list during planning (as opposed to being assigned them), you become more engaged in delivering them. With the stand-up meeting, you meet your peers every day with your TODO list... you want to honor the commitment you made the day before. With each iteration, you take the time to talk to each other, to identify what's going well and what should be improved, and you take action to improve it while continuing to do what works. Don't be afraid to change anything, even what I said ;) even any basic part of Scrum itself... The real key is to adapt Scrum to what your team needs to be happy and productive. You will not see it after one iteration, but after many of them...
I'd highly recommend looking at what value sprints/iterations would give you. It makes sense to apply them when there are enough tasks to do that they need to be prioritized and when your stakeholders need to know roughly when something will be done.
In this case I'd highly recommend combining three approaches:
schedule as many incoming tasks as possible for the next iteration at the earliest
use Yesterday's Weather to work out how much buffer you need for tasks that have to be dealt with immediately
use very short sprints, so as to maximize the number of tasks that can wait until at least the start of the next iteration
In fact that is true for every Agile/Scrum project, as they are in maintenance mode (adding to an existing, working system) from iteration 2.
In case that iterations don't provide enough value, you might want to take a look at a kanban/queuing system instead. Basically, when you are finished with a task, just pull the next task from a prioritized task queue.
In my opinion it depends on how often you have a 'real' release. In our specific case we have one major release each year and some minor releases during the year.
This means that when a sprint is done, it's not immediately deployed to our production server. Most of the time, a few sprints will take place before we have our complete 'project' finished. Of course we demo our sprints and deploy them to our testing server. The 'project' in its totality will undergo some end-to-end testing and will finally be deployed to our production servers: this is a minor release. We may decide not to deploy it immediately to our production server, for instance when it depends on other products/projects that need to be upgraded first. We then deploy it in our major release.
But when issues arise on our production server, immediate action may be required. There is no time to ask a product owner for the goal or importance (if we even have one for such an issue), because it blocks our clients from working with our application. In such urgent cases, these kinds of issues will not be put into a product backlog or sprint; they are pure maintenance tasks to be solved, tested and deployed as soon as possible, as individual items.
How do we combine this with our sprints? In order to keep our team members focused on the sprint, we decided to 'opt in / opt out' people from the team. This means that one or more people will not be part of the team for a certain sprint and can focus on other jobs, like these urgent fixes. The next sprint, this person will again be part of the team and someone else will be responsible for emergency calls.
Another option could be to set aside, say, 20% of the time in a sprint for 'unplanned tasks', but this would give a wrong indication of the amount of work we can do in a sprint (we will not have the same amount of urgent fixes in each sprint). We also want our team members to be focused on the sprint, and doing these urgent fixes in a sprint would distract them. Context switching also means time lost, and we try to avoid that.
It all depends on your 'environment' and on how fast urgent issues should be fixed.
Treat all "bug fixes" that don't have a story as new code. Estimate them, and work them as normal. By treating them as new stories you will build up a library of stories and tests. Only then can you begin to pin the behavior of the application.
Take a look at Working Effectively with Legacy Code by Michael Feathers. Here is a link to an excerpt. http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf
-Jason
I have had some success by recognizing that some percentage of a sprint consists of those unplanned "fires" that need to be addressed. Even in a short sprint, these can happen. If the development team has these responsibilities, then during sprint planning only enough stories are committed to the sprint to leave headroom for these other unplanned activities to occur and be handled as needed. If during the sprint no "fires" ignite, then the team can pull in stories from the top of the backlog. In many ways, it becomes a queue.
The advantage is that there is commitment to the backlog. The disadvantage is that there is this hole in capacity that can be communicated as an opportunity to drag the team into non-critical tasks. Velocity can also vary widely if that gap in capacity is filled with unplanned work that is not also tracked.
One way I have gotten around part of this is to create a "support" story that fills out the rest of that capacity with a task that represents the available hours the team can allocate to support activities. As support situations enter the sprint that cannot be deferred to the backlog, then a new story is created with tasks, and the placeholder support story and tasks are re-estimated to account for the new injection. If support incidents do not enter the sprint, then this support story is reduced as backlog items come in to fill the unused capacity.
The result is that sprints retain some of the desired flow and people feel like they aren't going to get burned when they need to take on supporting work. Velocities are more consistent, burndowns track normally, but there is also a record of the support injections that happen each sprint. Then during the sprint retrospective we can evaluate the planned and unplanned work and if action needs to be taken differently next sprint, we can do so. If that means a new sprint duration, or a pointed conversation with someone in the organization, then we have data to back up the new decisions.
Need some advice on working out the team velocity for a sprint.
Our team normally consists of about 4 developers and 2 testers. The scrum master insists that every team member should contribute equally to the velocity calculation, i.e. we should not distinguish between developers and testers when working out how much we can do in a sprint. This is correct according to Scrum, but here's the problem.
Despite suggestions to the contrary, testers never help with non-test tasks and developers never help with non-dev tasks, so we are not cross functional team members at all. Also, despite various suggestions, testers normally spend the first few days of each sprint waiting for something to test.
The end result is that typically we take on far more dev work than we actually have capacity for in the sprint. For example, the developers might contribute 20 days to the velocity calculation and the testers 10 days. If you add up the tasks after sprint planning though, dev tasks add up to 25 days and test tasks add up to 5 days.
How do you guys deal with this sort of situation?
We struggle with this issue too.
Here is what we do. When we add up capacity and tasks we add them up together and separately. That way we know that we have not exceeded total time for each group. (I know that is not truly scrum, but we have QA folks that don't program and so, to maximize our resources, they end up testing and we (the developers) end up deving.)
The second thing we do is really focus on working in slices. We try to pick first the tasks that can go to the QA folks fast. The trick is to focus on getting the smallest testable amount done and moved to the testers; if you try to finish a whole "feature" first, you are missing the point. While they wait for us, they usually put together test plans.
It is still a work in progress for us, but that is how we try to do it.
Since Agile development is about transparency and accountability, it sounds like the testers should have assigned tasks that account for their velocity, even if that means they have a task for surfing the web while waiting for something to test (though I would think they would be better served developing test plans for the dev team's tasks). This will expose the inefficiencies in your organization, which isn't popular, but that is what Agile is all about. The bad part is that your testers may be penalized for something that is an organizational issue.
The company I worked for had two separate teams (dev and QA) with two different iteration cycles; the QA cycle was offset by a week. That unfortunately led to complexity when it came to task acceptance, since a product wasn't really ready for release until the end of the QA iteration. That isn't a properly integrated team, but neither is yours, from the sound of it. Unfortunately the QA team never really followed Scrum practices (no real planning, stand-up, or retrospective), so I can't really tell whether that is a good solution or not.
FogBugz uses EBS (Evidence Based Scheduling) to create a probability curve of when you will ship a given project based on existing performance data and estimates.
I guess you could do the same thing here; you would just need to enter, for the testers: "Browsing Internet waiting for developers (1 week)".
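For the curious, the core idea behind EBS can be sketched in a few lines: sample historical estimate-to-actual ratios and apply them to the remaining estimates to build a distribution of completion times. This is a toy version under those assumptions, not FogBugz's actual implementation:

    import random

    # Ratios of actual time / estimated time for past tickets.
    historical_ratios = [1.0, 1.3, 0.8, 2.0, 1.1, 1.5]
    remaining_estimates = [6, 4, 8, 3]   # hours left on open tickets

    def simulate_total_hours():
        # Scale each remaining estimate by a randomly sampled past ratio.
        return sum(est * random.choice(historical_ratios)
                   for est in remaining_estimates)

    runs = sorted(simulate_total_hours() for _ in range(10_000))
    print("50% chance of finishing within", round(runs[5_000]), "hours")
    print("90% chance of finishing within", round(runs[9_000]), "hours")

The percentiles of the simulated runs are what give you the "probability curve" of ship dates rather than a single point estimate.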
This might be slightly off what you were asking, but here it goes:
I really don't like using velocity as a measure of how much work to do in the next sprint/iteration. To me velocity is more of a tool for projections.
The team lead/project manager/scrum master can look at the average velocity of the last few iterations and have a fairly good trend line to project the end of the project.
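As a rough illustration of such a projection (all numbers invented):

    import math

    # Project remaining sprints from average velocity (invented numbers).
    recent_velocities = [21, 18, 24]   # points completed in recent iterations
    remaining_points = 130             # story points left in the backlog

    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    sprints_left = math.ceil(remaining_points / avg_velocity)
    print(f"Average velocity {avg_velocity:.1f} -> about {sprints_left} sprints left")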
The team should be building iterations by commitment, as a team. Keep picking stories until the iteration has a good amount of work that the team will commit to complete. It's your responsibility as a team to make sure you aren't slacking by picking too few, and aren't overcommitting by picking too many. Different skill levels and specialties work themselves out as the team commits to the iteration.
Under this model, everything balances out. The team has a reasonable work load to accomplish and the project manager has a commitment for completion.
Make the testers pair-program as passive peers. If they have nothing to test, at least they can watch for bugs as the code is written. When they have something to test, in the second part of the week, they move to the functionality / "user story compliance" level of testing. This way both groups stay productive, and the testers essentially "comb" the code as it is written.
Sounds to me like your system is working, just not as well as you'd like. Is this a paid project? If it is, you could tie pay to merit: pay people based on how much of the work they get done. This would encourage cross-discipline work, although it might also encourage people to work on pieces that weren't theirs to begin with, or even internal sabotage.
Obviously, you'd have to be on the lookout for people trying to game the system, but it might work. Surely testers wouldn't want to earn half of what devs do.
First, an answer about velocity; then my personal insight about testers in a non-cross-functional Scrum team and the early days of every sprint.
I see an inconsistency there. If the team is not cross-functional, you already distinguish testers from developers, so you must also distinguish them in the velocity calculation. In a non-cross-functional team, testers don't really increase your velocity: your velocity is at most what the developers can implement, but no more than what the testers can test (if everything must be tested).
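Put as arithmetic, if every story must pass testing, the effective throughput is bounded by the slower group; a trivial sketch with invented numbers:

    # Effective velocity when everything must be tested (invented numbers).
    dev_capacity_points = 25    # what the developers can implement
    test_capacity_points = 15   # what the testers can verify

    effective_velocity = min(dev_capacity_points, test_capacity_points)
    print(effective_velocity)   # 15: the untested 10 points don't count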
Talk to your scrum master, otherwise there will always be problems with velocity and estimation.
Now, as for testers and the early days of a sprint: I work as a tester in a non-cross-functional team with 5 devs, so this answer may be a bit personal.
You could solve this in two ways: a) change the work organization by adding a separate test sprint, or b) change the way the testers work.
In a) you create a separate testing sprint. It can run in parallel to the dev sprint (just shifted by those few days), or you can hold it once every two or three dev sprints. I have heard about such solutions, but I have never worked this way.
In b) you ask the testers to review their approach to testing activities. Maybe it depends on the practices and tools you use, or the process you follow, but how can they have nothing to do in those early days? As I mentioned, I work as a tester with 5 developers in a non-cross-functional team. If I waited until a developer finished his task before starting my own work, I would never test all the features in a given sprint. Unless your testers perform only exploratory testing, they should have things to do before a feature is released to the test environment. Some activities can (or must) be done before the tester gets the feature/code into his hands. The following is what I do before features are released to the test environment:
- go through the requirements for the features to be implemented
- design test scripts (high-level design)
- prepare draft test cases
- go through possible test data (if the change being implemented manipulates data in the system, you need to take a snapshot of that data so you can later compare it with what the feature actually does to it)
- wrap everything up in test suites
- communicate with the developer while the feature is being developed - this way you gain a better understanding of the implemented solution (instead of asking questions when his mind is already on another feature)
- make any necessary changes to test cases as the feature evolves
Then, when the feature is complete, you:
- flesh out the test cases with any details not known to you earlier (trivial things: a button name can change, or an additional step can appear in a wizard)
- perform the tests
- raise issues
Actually, I find myself spending more time on the first part (designing tests and preparing test scripts in the appropriate tool) than on actually performing the tests.
If the testers do all they can right away, instead of waiting for code to be released to the test environment, it should help close this initial gap and minimize the risk of them not finishing their activities before the end of the sprint.
Of course there will always be less for testers to do at the beginning and more at the end, but you can try to minimize the difference. And if the above still leaves them lots of idle time at the beginning, you can give them tasks that involve no coding: some configuration, some maintenance, documentation updates, and so on.
The solution is never black and white, as each sprint may contain stories that require testing and others that don't. There is no problem in Agile with apportioning a tester, for example, for 50% of their time in one sprint and 20% in the next.
There is no sense in trying to apportion 100% of a tester's time to a sprint and then trying to justify it. Time management is the key.
Testers and developers estimate story points together. The velocity of a sprint is always a combined effort; QA/testers cannot have their own separate velocity calculations. That is fundamentally wrong.
If you have 3 devs and 2 testers and you include the testers' capacity and relate it to your output, then productivity will always look low. Testers take part in test case design, defect management, and testing, which are not directly attributed to development. You can track effort against each of these testing tasks, but you cannot assign velocity points to them.