Sprint velocity calculations [closed] - project-management

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
Need some advice on working out the team velocity for a sprint.
Our team normally consists of about 4 developers and 2 testers. The scrum master insists that every team member should contribute equally to the velocity calculation, i.e. we should not distinguish between developers and testers when working out how much we can do in a sprint. This is correct according to Scrum, but here's the problem.
Despite suggestions to the contrary, testers never help with non-test tasks and developers never help with non-dev tasks, so we are not cross functional team members at all. Also, despite various suggestions, testers normally spend the first few days of each sprint waiting for something to test.
The end result is that typically we take on far more dev work than we actually have capacity for in the sprint. For example, the developers might contribute 20 days to the velocity calculation and the testers 10 days. If you add up the tasks after sprint planning though, dev tasks add up to 25 days and test tasks add up to 5 days.
How do you guys deal with this sort of situation?

We struggle with this issue too.
Here is what we do. When we add up capacity and tasks we add them up together and separately. That way we know that we have not exceeded total time for each group. (I know that is not truly scrum, but we have QA folks that don't program and so, to maximize our resources, they end up testing and we (the developers) end up deving.)
The second thing we do is really focus on working in slices. We try to first pick tasks that can go to the QA folks fast. The trick to this is that you have to focus on getting the smallest testable amount done and moved to the testers. If you try to get a whole "feature" done then you are missing the point. While they wait for us they usually put together test plans.
It is still a work in progress for us, but that is how we try to do it.
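To make that "together and separately" bookkeeping concrete, here is a minimal sketch in Python (the numbers are the hypothetical ones from the question; the dictionary layout is just an illustration, not a prescribed tool):

    # Sprint-planning sanity check: compare capacity to planned work per
    # discipline instead of relying on one blended velocity number.
    capacity = {"dev": 20, "test": 10}   # person-days available this sprint
    planned = {"dev": 25, "test": 5}     # person-days of tasks taken on

    for discipline, available in capacity.items():
        committed = planned.get(discipline, 0)
        status = "OVERCOMMITTED" if committed > available else "ok"
        print(f"{discipline}: {committed}/{available} days ({status})")

Run against the question's numbers, this prints "dev: 25/20 days (OVERCOMMITTED)", which is exactly the mismatch that sprint planning should have caught.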

Since Agile development is about transparency and accountability, it sounds like the testers should have assigned tasks that account for their velocity. Even if that means they have a task for surfing the web waiting for testing (though I would think they would be better served developing test plans for the dev team's tasks). This will show the inefficiencies in your organization, which isn't popular, but that is what Agile is all about. The bad part is that your testers may be penalized for something that is an organizational issue.
The company I worked for had two separate (dev and qa) teams with two different iteration cycles. The qa cycle was offset by a week. That unfortunately led to complexity when it came to task acceptance, since a product wasn't really ready for release until the end of the qa team's iteration. That isn't a properly integrated team, but neither is yours from the sound of it. Unfortunately the qa team never really followed scrum practices (no real planning, stand-ups, or retrospectives), so I can't really tell if that is a good solution or not.

FogBugz uses EBS (Evidence Based Scheduling) to create a probability curve of when you will ship a given project based on existing performance data and estimates.
I guess you could do the same thing with this, just you would need to enter for the testers: "Browsing Internet waiting for developers (1 week)"
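For a rough idea of how an EBS-style projection works, here is a simplified sketch of the general idea (not FogBugz's actual algorithm; all numbers are made up): each estimator's history of actual-to-estimate ratios is resampled to turn the remaining estimates into a distribution of ship dates.

    import random

    past_ratios = [1.0, 1.3, 0.9, 2.0, 1.1]   # actual/estimate from history
    remaining_estimates = [3, 5, 2, 8]        # days of estimated work left

    def simulate_ship_days(trials=10_000):
        totals = []
        for _ in range(trials):
            # Scale each remaining estimate by a randomly drawn past ratio.
            totals.append(sum(e * random.choice(past_ratios)
                              for e in remaining_estimates))
        return sorted(totals)

    totals = simulate_ship_days()
    for pct in (50, 75, 95):
        print(f"{pct}% chance of shipping within "
              f"{totals[len(totals) * pct // 100]:.1f} working days")

A tester whose tasks are mostly waiting time would show up as a very noisy ratio history, which widens the curve.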

This might be slightly off what you were asking, but here it goes:
I really don't like using velocity as a measure of how much work to do in the next sprint/iteration. To me velocity is more of a tool for projections.
The team lead/project manager/scrum master can look at the average velocity of the last few iterations and have a fairly good trend line to project the end of the project.
The team should be building iterations by commitment as a team. Keep picking stories until the iteration has a good amount of work that the team will commit to complete. It's your responsibility as a team to make sure you aren't slacking by picking too few, and not over-committing by picking too many. Different skill levels and specialties work themselves out as the team commits to the iteration.
Under this model, everything balances out. The team has a reasonable work load to accomplish and the project manager has a commitment for completion.

Make the testers pair-program as passive peers. If they have nothing to test, at least they can watch out for bugs on the field. When they have something to test, in the second part of the week, they move to the functionality/"user story compliance" level of testing. This way, you have both groups productive, and basically the testers "comb" the code as it goes on.

Sounds to me like your system is working, just not as well as you'd like. Is this a paid project? If it is, you could make pay be a meritocracy. Pay people based on how much of the work they get done. This would encourage cross discipline work. Although, it might also encourage people to work on pieces that weren't theirs to begin with, or internal sabotage.
Obviously, you'd have to be on the lookout for people trying to game the system, but it might work. Surely testers wouldn't want to earn half of what devs do.

First an answer about velocity, then my personal insight about testers in a non-cross-functional Scrum team and the early days of every sprint.
I see an inconsistency there. If the team is not cross-functional, you already distinguish between testers and developers. In that case you must also distinguish them in the velocity calculation. If the team is not cross-functional, testers don't really increase your velocity: it will be at most what the developers can implement, but no more than what the testers can test (if everything must be tested).
Talk to your scrum master, otherwise there will always be problems with velocity and estimation.
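The velocity cap can be stated as a tiny model (illustrative only; it assumes every feature must pass testing before it counts as done):

    # If everything must be tested, the number of features a sprint can
    # finish is capped by the slower discipline.
    def effective_velocity(dev_features: int, test_features: int) -> int:
        # dev_features: features the devs can build this sprint
        # test_features: features the testers can get through this sprint
        return min(dev_features, test_features)

    print(effective_velocity(10, 6))  # -> 6: testing is the bottleneck

Anything the developers build beyond the testers' capacity is inventory, not velocity.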
Now as for testers and the early days of the sprint. I work as a tester in a non-cross-functional team with 5 devs, so this answer may be a bit personal.
You could solve this in two ways: a) change the work organization by adding a separate test sprint, or b) change the way testers work.
In a) you create a separate testing sprint. It can happen in parallel to the devs' sprint (just shifted by those few days), or you can make it happen once every two or three dev sprints.
I have heard about such solutions but have never worked this way.
In b) you must ask testers to review their approach to testing activities. Maybe it depends on the practices and tools you use, or the process you follow, but how can they have nothing to do in these early days? As I mentioned before, I work as a tester with 5 developers in a non-cross-functional team. If I waited to start my work until a developer finished his task, I would never test all the features in a given sprint. Unless your testers perform only exploratory testing, they should have things to do before a feature is released to the test environment. There are some activities that can be done (or must be done) before the tester gets the feature/code into his hands. The following is what I do before features are released to the test environment:
- go through requirements for features to be implemented
- design test scripts (high level design)
- prepare draft test cases
- go through possible test data (if the change being implemented manipulates data in the system, you need to take a snapshot of this data to compare later with what the feature does to it)
- wrap up everything in test suites
- communicate with the developer as the feature is being developed; this way you get a better understanding of the implemented solution (instead of asking later, when his mind is already on another feature)
- make any necessary changes to test cases as the feature evolves
Then when feature is complete you:
- flesh out test cases with any details not known to you earlier (it sounds trivial, but a button name can change, or an additional step can appear in a wizard)
- perform tests
- raise issues
Actually, I find myself spending more time on the first part (designing tests and preparing test scripts in the appropriate tool) than actually performing those tests.
If they do all they can right away instead of waiting for code to be released to the test environment, it should help with this initial gap and minimize the risk of testers not finishing their activities before the end of the sprint.
Of course there will always be less for testers to do in the beginning and more at the end, but you can try to minimize this difference. And if the above still leaves them lots of time to waste at the beginning, you can give them tasks that involve no coding: some configuration, some maintenance, documentation updates, and so on.

The solution is never black and white, as each sprint may contain stories that require testing and others that don't. There is no problem in Agile with apportioning a tester, for example, for 50% of their time in one sprint and 20% in the next.
There is no sense in trying to apportion 100% of a tester's time to a sprint and trying to justify it. Time management is the key.

Testers and developers estimate story points together. The velocity of a sprint is always a combined effort. QA / testers cannot have their separate velocity calculations. That is fundamentally wrong.
If you have 3 devs and 2 testers and you include the testers' capacity and relate it to your output, then productivity will always show low. Testers take part in test case design, defect management and testing, which is not directly attributed to development. You can have effort tracked against each of these testing tasks, but you cannot assign velocity points to them.

Related

Adding more structure to our development processes? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I work with a small team (4 developers) writing firmware and software for our custom hardware. I'm looking into better ways to organise the team and better define processes.
Our Current Setup
Developers are generally working on 2-3 projects at a time.
We have projects that work in an iterative sort of way, where a developer is in regular contact with the customer and features are slowly added and bugs fixed.
We also have projects with fixed delivery dates and long lead times; final hardware might appear only a few weeks before delivery. The fixed projects are usually small changes to an existing product or implementation, and the work is somewhat intermingled.
We are also moving from consulting to products, so we are occasionally adding features that we think will add value, at our own cost.
The Issues
We have a weekly meeting where proportions of time are allotted to each project. "Customer A wants to test feature X next week", so the required time is allotted. "Customer B is having issues with Y, could developer P drive down and take a look?", etc.
When we're busy, these plans are very loosely followed. Issues arise and lower priority stuff gets pushed back. Sometimes, priorities are not clear to developers so there is friction when priorities appear to change. The next week there will be a realisation that we're getting behind on project Z and we all pull-off some long days.
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Developers are generally working on 2-3 projects at a time.
Multitasking is incredibly inefficient. Switching the brain from one task to another requires time for the gears to change over.
When we're busy, these plans are very loosely followed.
Then why create plans at all?
Is it at all possible to dedicate just one developer to one task / product / customer? So developer P is the only one who talks to customer B? (Certainly the developer would need to document exactly what he's doing in case he gets hit by a bus, but he should be recording issues and roadmaps anyway.)
The next week there will be a realisation that we're getting behind on project Z and we all pull-off some long days.
If there had been only one developer on project Z anyway, he wouldn't have been distracted by customer A's problems.
Don't think in terms of a pool of developers serving a pool of customers, think of one developer for a given customer. (This can make vacation planning a little tougher, but if you're constantly pulling all-nighters, you aren't spending enough time away from the office anyhow.)
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Aren't we all.
"Customer A wants to test feature X next week", so the required time is allotted.
Allotted by whom?
Do you create your own schedules? If not, the only response to management creating a schedule for you is all-nighters.
Realistic non-all-nighter schedules will bother management. Until you can prove that your customers want a better schedule with fewer all-nighters, there isn't much you can do.
The only way to reduce the all-nighters is to get stuff done sooner. But if the hardware doesn't arrive sooner, there isn't much you can do, is there?
Two thoughts: drive quality and improve estimates.
I work in a small software shop that produces a product. The most significant difference between us and other shops of a similar size I've worked in is full-time QA (now more than one person). The value this person should bring on day one is this: no testing until the tests are written out. We use TestLink. There are several reasons for this approach:
Repeating tests to find regression bugs. You change something, what did it break?
Thinking through how to test functionality ahead of time - this is a cheek-by-jowl activity between developer and QA, and if it doesn't hurt, you're probably doing it wrong.
Having someone else test and validate your code is a Good Idea.
Put some structure around your estimation activity. Reuse a format, be it Excel, MS Project or something else (at least do it digitally). Do so, and you'll start to see patterns repeating in how you build software. Generally speaking, include time in your estimates for thinking about it (a.k.a. design), building, testing (QA), fixing and deployment. Also, read McConnell's book Software Estimation and use anything you think is worthwhile from it; it's a great book.
Poor quality means longer development cycles. The most effective step is QA; short of that, unit tests. If it were a web app I'd also suggest something like Selenium, but you're dealing with hardware, so I'm not sure what can be done. Improving estimates means being able to forecast when things will suck, which may not sound like much, but knowing ahead of time can be cathartic.
I suggest you follow the Scrum Framework. Create a Scrum environment with an enterprise product. Have product Teams working on the features of their own individual products, each of which is part of the combined enterprise product. If you have the resources, have a production/issue-support and infrastructure Scrum Team. If the issues are coming your way too quickly, have the infrastructure Team try following Kanban or Scrumban.
The Scrum Framework in itself will solve most of your problems if adopted properly.

Is there such a thing as a process smell? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
We're generally familiar with code smells here, but just as damaging, if not more so, is when the business side of things - as much as it falls within our domain - is going wrong.
As examples, the inverse of anything on the Joel test would be considered a major process smell (i.e. no source control, no testers) but those are obvious ones and the point of "smells" is that they're subtle and build into something destructive. I'm looking for granularity here.
To start off with here's a couple (which can be turned into a list as the answers come in)
Writing code before you have a signed contract with the client
Being asked for on-the-fly estimates ("just a rough one will do") for anything which will take more than a day (a few hours?)
Ancient cargo-cult wisdom prevails (personal example - VisStudio sourcesafe integration is banned)
You've stopped having non-project specific group meetings (or lack any similar forum for discussion)
So what are some other process smells, and just how bad are they?
The book "Antipatterns" by William J. Brown et. al. has a bunch of project-related smells. They aren't always disasters in progress; mitigating circumstances exist for just about any smell.
The Portland Pattern Repository also has a page on Antipatterns, covering many of the same topics as the "Antipatterns" book. Visit http://c2.com/cgi/wiki?AntiPatternsCatalog and scroll down to "Management Antipatterns." A few examples:
Analysis Paralysis - a team of otherwise intelligent and well-meaning analysts enter into a phase of analysis that only ends when the project is cancelled.
Give Me Estimates Now - a client (or PointyHairedBoss) demands estimates before you have enough data to deliver them. You accept the "challenge" and give off-the-top-of-your-head estimates (i.e. guesses). The client/boss then treats the estimate as an iron-clad commitment.
Ground Hog Day Project - meetings are held which seem to discuss the same things over and over and over again. At the end of said meetings, decisions are made that "something must be done."
Design By Committee - Given a political environment in which no one person has enough clout to present a design for a system and get it approved, how do you get a design done? Put together a big committee to solve the problem. Let them battle it out amongst themselves and finally take whatever comes out the end.
Collect them all! :-)
Back Dating - being given an end date and then told what needs to get done
Inverse QA Coverage - QA focuses on the non-essential items (because that's all they know how to test)
Environmental Alignment Issues - the various environments (Dev, Test, Staging, Production) are not in sync for code and data - therefore any testing prior to production is invalid
Delivery Date Detachment - no one believes in the end date because: it was made up to begin with and 100% of prior projects never met their delivery dates
Old Grumpy Code - old code is feared because there's no desire to refactor
The evil PM triangle (scope, cost, resources and/or quality) - to adjust the project you need to add people, reduce quality, reduce scope, etc. Once a project is in motion, most changes (even a reduction in scope) will increase time and cost and reduce quality. Once the train tracks are down, it's tough to just hang a left turn.
One smell I have a real problem with (because I work with it): Not ditching tools, dev software, methodologies, or anything else that doesn't work.
Many times, there is one (or more than one) piece of software that clearly, blatantly, doesn't work and likely interferes with the development process, but which a project manager simply refuses to replace/upgrade "because it would cost too much {time, money, whatever} to replace."
Edit: This also extends to machines and other infrastructure too (examples: a build server that takes an hour to do a two-minute build, or a version control system - ahem CVS - that takes 15 minutes to find out whether there have been any updates on a 50MB source tree).
Shipping a prototype - "we'll productize it later"
I suggest checking out the Organizational section of the Wikipedia page on Anti-Patterns. The ones I've had to deal with are 'Crisis mode' and the 'on-the-fly estimates' you mentioned.
You haven't had a post-project review... when the project ended 6 months ago.
Some smells I have seen:
Optimistic management, but they can't pay your salary this month. This is really bad. I left the company in time, but it died a few months later.
Extreme, fanatic team-building sessions, focusing on how great the company is. But in the end it all goes down.
Good new people are laid off because they tried to change the process. A real shame. I have seen some people who really tried to improve the company, but old habits never die, so it often ends in big disillusionment.
The boss is always right mentality...
There is more but I won't spoil the fun for others.
Changes to process are made with no thought to timing or current deliverables, then immediately reversed when deliverables turn up late due to instituting new process.
Someone goes on medical leave and the team falls behind trying to pick up that person's work as well as their own. When the managers or clients or client sales reps are told things will be delayed as a result, they are only concerned about when things will happen and whether you can work nights and weekends in the meantime, and they never even ask about the person with the emergency and how he or she is doing.
When overtime for low-level people is expected, but the people who want this urgently leave on time and are not available to answer questions. Or when they make you work overtime to be ready by 8 am and then don't look at it in QA for three more days. Hello, I could have done it by then in regular hours.
Delivery of needed files (for data import, for example) or information minutes before the due date, and then blaming developers when the due date is not met.
What I call: NIH (Process edition), a.k.a. Choose your own adventure.
Evidence of this:
you spend endless meetings debugging the process. And refactoring it.
nothing really gets done, because no one knows what they should be doing.
I guess this is an antipattern, rather than a smell.
Interesting question and even more interesting answers. Thanks for those.
I have been in almost all roles of software development (Developer, QA, Tech Lead, Project Manager - even client) and I can safely list the following smells:
How quickly does the team react to new inputs (and how accepting are they of change)?
How many layers does it pass through to get things done (bureaucracy)?
How clear are the features/tasks, are the goals SMART, and do we have any KPIs?
How serious is the team working on the project about it?
Is the team meeting regularly (read: daily) to discuss achievements, goals and issues?
Most important, however, and the most evident (to a good nose), is the hygiene level of the project management tool being used (Excel sheet, piece of paper, agile tool, email, whatever, in whichever methodology you use). That is the first thing I notice while evaluating projects.
Do I know where the project stands at the moment?
Can I tell (Without asking the team) what needs to be done next?
Can I tell what the team is working on right now?
Can I tell when the next release is and if it's achievable?
Can I tell if client is getting regular updates?
Can I tell if client is giving approvals and if his feedback is taken care of in time?
Can I tell, just from looking, the load distribution of the project across the engineers?
Obviously, all this is well covered if you pick any modern Agile methodology, but depending on the market and kind of work, the mileage may vary. So, keeping myself methodology-agnostic, this is the bare minimum set of smells a project should be rid of.

What steps make up your web development process and how much time does each phase take?

Let's say you work 100 days on a project. How many days would each phase of your process (requirements analysis, specification, etc.) take?
I'm interested also in the ratio of specific activities in every phase, such as writing tests, back-end coding, front-end coding, visual design, database design etc.
Many thanks!
EDIT:
Just to make things clear, I'm not talking about web site design - I'm interested in more "serious" web development, such as custom business web applications. I know, everything depends on the specifics of each project, however I suppose the ratios could be roughly the same from project to project.
EDIT2:
As Helen correctly remarked, this question is really hard to answer, since projects can be so different and so can be teams. To make it more specific, let's say you have a team of four developers - two of them for back-end work, one for front-end programming and one for design & html/css coding (one member of the team acts as a project manager) and you are supposed to develop StackOverflow.com site.
We're running agile scrum projects, so we typically run all these activities in parallel. So while I cannot answer your exact question, I can give you some ideas of the ratios we have found to be effective:
4-5 developers can be served by one client-side programmer (html/css), one on-team tester and one interaction designer (who works with the customer to design wireframes). A team like this typically needs a 50% graphic designer for most applications, but your mileage may vary there. Then there's the project manager, and there are all sorts of other stakeholders who are not part of the core development team.
In the development team you normally have a couple of developers who are sharp on client-side development and a similar number on the back-end. These staffings also tend to reflect resource usage ;) Testing is an integral part of development, over and above the efforts of the on-team tester.
Your local conditions may of course vary, but these numbers are just to give you some idea.
Step 1: denial
Step 2: anger
Step 3: acceptance
The time each step takes is different for all team members involved.
I agree with everyone who started along the lines of, "It depends on the project".
On the other hand, I do think that there's a consistent process that can be followed; only tweaking the percentages of effort to match the project:
Typically, I follow these basic principles:
Discovery - determine the feature/functionality of the system. The easiest (and worst) thing to do is accept what's being asked for and go with it.
For example, "building stackoverflow.com" is a pretty broad request - and is actually the wrong request. The project has to start with "I need an online location where programmers can collaborate".
Based on that one thing you're trying to solve, you can drill down into all the details you want - like how a question will be answered, asked, rated, etc.
I think this is the most crucial step! output = requirements/specification; 20/100 days can safely be spent here
Wireframing - this is where I like to use basic HTML pages, paint.NET, or even construction paper and glue to mock up every aspect of the final site's functionality. I like using paper because it's easy to make changes :)
Going through this process forces you to consider just about every aspect of the user experience and gives you the flexibility to add/remove features and adjust your requirements as needed. Your customer has some input to changes before you've committed a bunch of time to writing code.
An added bonus is that you get to use paste :)
10/100 days
Implementation/Testing - I group implementation AND testing together because I think it's short sighted to develop a whole site without testing along the way. (At the same time, you still need step 4). This is the part where the rubber hits the road. If you've handled your client properly in steps 1 and 2, you'll be pleasantly writing your code without any last-minute changes in scope (or at least very few). I try to follow a general set of steps for implementation:
data development (db design, query design, sample data setup)
site framework (set up your environment(s); production, dev, and qa)
front-end structure (css, standard classes, standard html structures)
start coding!
55/100 days
SQA - hopefully you can get some non-involved parties/end users to test the app as you go. Test plans need to be developed to make clear what should be tested and the desired outcomes. I like using real people for testing the front end; automated tools are fine for code/backend modules.
This is a good time to let the client see things progressing - they should have very limited ability to make changes at this point.
10/100 days
Delivery/Post Production honeymoon - you've built it, tested it, and you're ready to deploy. Get the code out there and let the client play. You shouldn't have much to tweak; but I'm sure there will be some adjustments.
5/100 days
Some of this seems idealistic; but you'd be surprised how quickly you can ship your application when you've got a well-reviewed, well-created specification.
It is impossible to give a meaningful answer to this question. The ratios will not be even roughly the same from project to project. For some projects the visual design barely matters (as long as it more or less works) but the database is critical and complex. For others, it's all about providing a smooth user experience with lots of AJAX goodies and other eye candy, but the underlying data is trivially simple to organise and store.
It sounds like you're thinking mainly of one-man projects, but for larger teams the size and setup of the team also matters, as well as your development process.
Probably we are an unusual development shop. Our whole existence (at least during work hours) is requirements gathering. Developers are required to also work in every other department, be it answering the phone in after-sales support (and fighting the CRM software), driving a forklift in the warehouse (and fighting the mobile terminals) or packing crates in the shipping station (and fighting confusing delivery notes).
When we tackled a new project, "requirements gathering" was usually an afternoon at the whiteboard, usually with somebody from the department that would use the new software most. There was little upfront design and lots of refactoring and rewriting. We were very happy with this and generated about 100,000 lines of code that are well-architected and stable.
But it seems we are hitting a complexity barrier now. This is very frustrating, because moving to "heavier" processes than hack-and-slay coding results in a dramatic loss of productivity.
Just to be clear - you're basically time-boxing your work, which is directly related to having a fixed budget (4 developers x $x per day x 100 days - assuming it's 100 days' duration and not 100 days of work effort). If that's the case then, on average, you would spend:
25% up-front planning, which includes scope, spec development, technology approach, logistics (computers, servers, work space) and resource gathering.
50% development - test case (TDD) development, schema design and implementation, front-end coding, back-end coding, deployment
15% testing - basic break/fix activities
10% overhead/management - project management, communication and coordination.
Very rough estimates - there are many 'areas' to consider, including resource skills/maturity, technology being used, location of resources (one room or across the country), level of requirements, etc. The use of 'skill-specific' resources would make planning more difficult, since you might need resources to perform multiple roles - one suggestion would be to get 3 generalists who can help spec/design/plan and one tech wizard who would ensure the platform and database are set up correctly (key to success once you have requirements that are as good as possible).
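For what it's worth, a trivial sketch of that split applied to a 100-day time-box (the percentages are the ones suggested above):

    # Rough allocation of a fixed 100-day time-box using the split above.
    total_days = 100
    split = {"planning": 0.25, "development": 0.50,
             "testing": 0.15, "overhead/management": 0.10}

    assert abs(sum(split.values()) - 1.0) < 1e-9  # shares must cover the budget
    for phase, fraction in split.items():
        print(f"{phase}: {fraction * total_days:.0f} days")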
That is truly a tricky question. To give a somewhat exact estimate of the ratio of time to apply to each step - if we take a classical approach of design, implement, test and deploy - one needs to know the specification and the expertise of the project members.
If you take McConnell's book "Software Estimation" (which I highly recommend), there is a chapter in there about historical data and how to use it for future projects.
I do not think that you have exact historical data from former projects - well, I don't have any - although I always remind myself to record it ;)
Since the smallest failures or uncertainties in the design phase are the most crucial ones, take a lot of time to specify what you want to do. Make sure that everyone understands it the same way, and write it down.
To cut a long story short, I'd put 50%-75% of the time into design (if 75%, this would include a prototype to clear up all uncertainties) and equal parts into implementation and test.
If you are using TDD you would mix design and test a bit, so you would take a bit of the design phase and add it to the test phase.
1. Build a list of client needs: 1-2 days
This depends on the client, what they need and how well prepared they are.
2. Designers do initial sketch-ups: 2-3 days
A bit of branching happens here, as steps 2 and 3 happen concurrently.
3. Programmers build any functionality not already in our existing system: 1 day - 1 month
This depends on the client and what they need more than most anything else. This also only produces functional code.
4. Repeat steps 2 & 3 until the client is happy with the general feeling of what we have.
Could be 1 iteration, could be 100 (not likely; if by 10 we couldn't make them happy we'd send them somewhere else).
5. Build the final design: 1-5 days
This is the final, no-error, valid CSS/HTML/JS, everything cross-browser, etc.
6. Build the final functionality: 2-3 days
This code is "perfect": it works 100%, it is pretty, there are no known bugs, and the developers are happy to ship it. This and step 5 happen concurrently.
7. Deploy: 10 seconds.
Then 2 weeks, 2 months and 6 months later we do a review to make sure there have been no problems.
So if you skip the review, this usually takes 8-20 days; I don't know how you'll work that into 100 days.
If we are just building an application (or extending one) for a client, we would spend 2-3 days defining EXACTLY what they need, then however long it takes to build it.

How do you apply Scrum to maintenance and legacy code improvements? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
As the title suggests...
How can I apply a scrum process to anything that doesn't work on new code and can be estimated to some degree?
How can I apply a scrum process to maintenance and emergency fixes (which can take from 5 minutes to 2 weeks to fix) type of environment when I still would like to plan to do things?
Basically, how do I overcome unplanned tasks and tasks that are very difficult to estimate with the scrum process? Or am I simply applying the wrong process for this environment?
Basically, how do I overcome unplanned tasks and tasks that are very difficult to estimate with the scrum process? Or am I simply applying the wrong process for this environment?
You're using the wrong process for this environment. What you need is a stack/queue management process which is separate to your planned/estimated SCRUM development process.
The reason for this is simple and twofold:
1. As you mention in your post, it is often very difficult to estimate maintenance tasks, especially where legacy systems are involved. Maintenance tasks in general, and legacy systems specifically, have a tendency to involve 'curly' problems or have a long 'tail', where one seemingly simple fix requires a slightly more difficult change to another component, which in turn requires an overhaul of the operation of some subsystem, which in turn... you get the point.
2. Quite often when dealing with maintenance tasks, by the time you have finished estimating, you have also finished solving the problem. This makes the process of estimation redundant as a planning tool. Those who insist on dividing estimation from solving the problem for maintenance tasks are simply adding unnecessary overhead.
Put simply, you need a queueing system. It will have these components:
- A 'pool' of tasks which have been identified as requiring attention. Newly raised items should always go into the pool, never the queue.
- A process for moving these tasks out of the pool and onto the queue. Usually a combination of business/technical knowledge is required.
- A queue of tasks which are clearly ordered, such that developers responsible for servicing the queue can simply pick from the front of it.
- A method for moving items around in the queue (re-prioritising), to allow 'jumping the queue' for critical/emergency items.
- A method for delivering the completed items which does not interrupt servicing the queue. This is important because the overhead of delivering maintenance items is usually significantly lower than development work. You don't want your maintenance team sitting around for a day waiting for the build and test teams to give them the OK each time they deliver a bugfix.
There are other nuances to queue management, but getting these in place should set you on the right path.
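A minimal sketch of such a queueing system in Python (the names and structure are illustrative, not a prescribed implementation):

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        priority: int = 0          # higher means more urgent

    @dataclass
    class MaintenanceQueue:
        pool: list = field(default_factory=list)    # raised, not yet triaged
        queue: list = field(default_factory=list)   # ordered, ready to service

        def raise_task(self, task: Task) -> None:
            self.pool.append(task)      # new items always enter the pool

        def triage(self, task: Task, priority: int) -> None:
            # The business/technical call that promotes a pooled task.
            self.pool.remove(task)
            task.priority = priority
            self.queue.append(task)
            self.queue.sort(key=lambda t: t.priority, reverse=True)

        def reprioritise(self, task: Task, priority: int) -> None:
            # Lets critical/emergency items jump the queue.
            task.priority = priority
            self.queue.sort(key=lambda t: t.priority, reverse=True)

        def next_task(self) -> Task:
            return self.queue.pop(0)    # devs always pick from the front

    q = MaintenanceQueue()
    bug = Task("login page returns 500")
    q.raise_task(bug)
    q.triage(bug, priority=5)
    print(q.next_task().name)           # -> login page returns 500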
If you have that much churn in your environment, then your key is going to be shorter iterations. I've heard of teams doing daily iterations. You can also move towards a Kanban-type style where you have a queue with a fixed limit (usually very low, like 2 or 3 items) and no more items can be added until those are done.
What I'd do is try out one week iterations with the daily stand-ups, backlog prioritization, and "done, done". Then reevaluate after 5 or 6 weeks to see what could be improved. Don't be afraid to try the process as is - and don't be afraid to tweak it to your environment once you've tried it.
There was also a PDF called Agile for Support and Operations in 5 minutes that was recently posted to the Scrum Development list on Yahoo!
No one said that backlog items have to be new code. Maintenance work, whether bug fixes, enhancements or data fixes can be put into the Product Backlog, estimated and prioritized. This is actually one of the biggest benefits of using Scrum - no more arguments with users about whether something is a bug fix or an enhancement.
With Waterfall, there's a tacit understanding that bugs are the responsibility of the developers. Somehow, they are on the hook to fix them without impacting the development of new code and features. So they are "free" to the users, but a massive inconvenience to the developers.
In Scrum, you recognize that all work takes time. There is no "free". So the developers freely accept that something is a bug but it still goes into the Product Backlog. Then it's up to the Customer to decide if fixing the bug is more important than adding new features. There are some bugs that you can live with.
As the title suggests... How can I apply a scrum process to anything that doesn't work on new code and can be estimated to some degree?
On the contrary, I've heard teams find adopting Scrum easier in the maintenance phase, because the changes are smaller (no grand design changes) and hence easier to estimate. Any new change request is added to the product backlog, estimated by devs and then prioritized by the product owner.
How can I apply a scrum process to maintenance and emergency fixes (which can take from 5 minutes to 2 weeks to fix) type of environment when I still would like to plan to do things?
If you're hinting at fire-fighting type activity, keep a portion of the iteration work quota for such activities. Based on historical trends/activity, you should be able to say, e.g., we have a velocity of 10 story points per iteration (4-person team, 5-day iteration). Each of us spends about a day a week responding to emergencies, so we should only pick 8 points' worth of backlog items for the next iteration to be realistic. If we don't have emergency issues, we'll pick up the next top item from the prioritized backlog.
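The arithmetic behind that reservation, as a sketch (numbers from the example above):

    # Reserving sprint capacity for fire-fighting.
    people, iteration_days = 4, 5
    velocity = 10                   # story points per iteration, historically
    emergency_days_per_person = 1   # observed fire-fighting load per week

    emergency_fraction = (people * emergency_days_per_person) / (people * iteration_days)
    commitment = velocity * (1 - emergency_fraction)
    print(f"Plan ~{commitment:.0f} points of backlog items")  # -> ~8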
CoryFoy mentions a more dynamic/real-time approach with kanban post-its in his response.
Basically, how do I overcome unplanned tasks and tasks that are very difficult to estimate with the scrum process? Or am I simply applying the wrong process for this environment?
AFAIR Scrum doesn't mandate an estimation technique; use one that the team is most comfortable with: man-days, story points, etc. The only way to get better at estimation is practice and experience, I believe. The more the same set of people sit together to estimate new tasks, the better their estimates get. In a maintenance kind of environment, I would assume that it's easier to estimate, because the system is more or less well known to the group. If not, schedule/use spikes to get more clarity.
I sense that you're attempting to eat an elephant here. I'd suggest the following bites:
Working Effectively with Legacy code
Agile Estimating and Planning
Agile Project Development with Scrum
Treat all fixes and improvements as individual stories and estimate accordingly. My personal view is that things that take less than 10-15 minutes to fix should just be done straight away. Those that take longer become part of the current 'fix & improvement' iteration cycle. As with estimating regular requirements, you take a best guess as a team. As more information comes to light and the estimates turn out to be off, adjust the iteration and upcoming sprints.
It's hard to apply an iteration cycle to fixes and improvements as, more often than not, they prevent the system from working as it should, and the 'business' puts pressure on for them to go live asap. At this point it may work out better moving to a really short iteration cycle, like one or two weeks.
You ask about how to use a process for emergencies. Try to reserve the word "emergency" for things that require hacking the code in the production environment with the users connected at the same time. Otherwise, stakeholders are likely to abuse the word and call an emergency on anything they would like to have really fast. Lack of process does not mean out of control: somebody must be accountable for declaring the emergency, and somebody (best practice is somebody else) must authorize the changes outside the normal process and then take responsibility for it.
Other than that, the suggestion of using each iteration to complete a number of fixes and improvements is probably the best way to go.
This depends a lot on the application life cycle. If this is a 'Sunset' application to be retired soon, of course the focus would only be on fixing the top-priority bugs.
If the product is 'Mature', has a roadmap and is continuing to evolve, you would have fixes and enhancements to take care of. There's a lot of impetus to keep the design clean and evolve by refactoring. This may mean periodic minor and major releases [except for eFixes - emergency fixes/hotfixes]. You can practice agile to your heart's delight, as enhancements and fixes can be storyboarded and made part of your Sprint Backlog. The entire list would make up your Product Backlog.
Bottom line: if you want to refactor and keep your application design clean [programmers tend to take shortcuts if the focus is exclusively bug fixing], it can only happen with a 'living' application - one that is evolved and updated. And agile is a natural fit.
Even if you have only fixes (that is, it's a 'Turing complete' ;) or Sunset application), it helps if they can all be rolled into a sprint and rolled into production at the end of each sprint. If the fixes need to be rolled into production as and when they're fixed, it's much more difficult to apply Scrum.
We have applied scrum in this context.
Some of the keys to success:
1. Everyone in the enterprise buys into the Scrum idea (this is crucial for success)
2. Sprints of about 2 weeks (our first 2-3 sprints were 1 week long, to learn the process)
3. Under no circumstances can a point be added to the current sprint
4. If a real emergency arises, stop the sprint, do a retrospective and start a new sprint
5. Take time for retrospectives (time to share thoughts about the last sprint and to analyze it)
6. In each sprint, insert at least one task to improve the process (often added to the backlog during the retrospective); it's good for troop morale, and at the end of the day you will be on your way to having fewer emergencies
7. TIME-BOXED daily stand-up meetings
As for estimation, usually the more you estimate, the more precise you become. What is good about Scrum is that each programmer picks his own task and can set a new estimate if he thinks the current one is not realistic. And if you still have issues with estimation, let your team find a solution... you may be surprised by what they come up with.
For the 2-week fix: if that is the original estimate, cut it into smaller pieces. If you made a more optimistic estimate (let's say 2-3 days), the issue should be raised as a blocker in the stand-up meeting. Maybe somebody else has ideas about how to fix it. You can decide to do some pair programming to find a solution. Sometimes just describing the bug to another programmer helps a lot with debugging. You can also postpone it until after other tasks in the sprint. The idea is to deliver fully functional tasks. If you don't have time to fix it in full and demonstrate it, even if you're 90% done (yeah! we all know what that means), you consider it not done in that sprint. In the next sprint you will be able to address it with a correct time estimate.
Finally, from what I understand, Scrum is more about having "tools" to improve your process. You start with something simple. You do small iterations. In each iteration you have a FIXED TARGET to complete instead of an infinite bug list. Because you pick your tasks from the list during planning (as opposed to being assigned them), you become more engaged in delivering them. With the stand-up meeting, you meet your peers every day with your TODO list... you want to honour the commitment you made the day before. With each iteration, you take the time to talk to each other and to identify what's going well and what should be improved. You also take action to improve things and continue doing what works. Don't be afraid to change anything, even what I said ;) or even any basic of Scrum itself... The real key is to adapt Scrum to what your team needs to be happy and productive. You will not see it after one iteration, but over many of them...
I'd highly recommend looking at what value sprints/iterations would give you. It makes sense to apply them when there are enough tasks that they need to be prioritized, and when your stakeholders need to know roughly when something will be done.
In this case I'd highly recommend combining three approaches:
- schedule as many incoming tasks as possible for the next iteration at the earliest
- use Yesterday's Weather to work out how much buffer to plan for tasks that have to be dealt with immediately (a sketch of this follows below)
- use very short sprints, so as to maximize the number of tasks that can wait until at least the start of the next iteration
In fact that is true for every Agile/Scrum project, as they are in maintenance mode - adding to an existing, working system - from iteration 2.
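Here is a sketch of that Yesterday's Weather buffer calculation (it assumes you track how many points of unplanned work actually landed in each recent iteration; the numbers are hypothetical):

    # Yesterday's Weather: size the buffer from the unplanned work that
    # actually arrived in recent iterations.
    recent_unplanned_points = [3, 5, 2, 4]   # per past iteration
    historical_velocity = 12                 # points finished per iteration

    buffer = sum(recent_unplanned_points) / len(recent_unplanned_points)
    plannable = historical_velocity - buffer
    print(f"Reserve {buffer:.1f} points; schedule {plannable:.1f} points of backlog")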
In case that iterations don't provide enough value, you might want to take a look at a kanban/queuing system instead. Basically, when you are finished with a task, just pull the next task from a prioritized task queue.
In my opinion it depends on how often you have a 'real' release. In our specific case we have one major release each year and some minor releases during the year.
This means that when a sprint is done, it's not immediately deployed to our production server. Most of the time a few sprints will take place before we have our complete 'project' finished. Of course we demo our sprints and deploy them to our testing server. The 'project' in its totality will undergo some end-to-end testing and will finally be deployed to our production servers -> this is a minor release. We may decide not to deploy it immediately to our production server, for instance when it depends on other products/projects that need to be upgraded first. We then deploy it in our major release.
But when issues arise on our production server, immediate action may be required. There is no time to ask a product owner for the goal or importance (if we even have one for such an issue), because it blocks our clients from working with our application. In such urgent cases, these kinds of issues will not be put into a Product Backlog or sprint; they are pure maintenance tasks to be solved, tested and deployed as soon as possible, as individual items.
How do we combine this with our Sprint? In order to keep our Team members focused on the sprint, we decided to 'opt in - opt out' our people from the Team. This means that one or more people will not be part of the Team for a certain Sprint and can focus on other jobs like these urgent fixes. The next Sprint this person will again be part of the Team and someone else will be responsible for emergency calls.
Another option could be to set aside something like 20% of the time in a Sprint for 'unplanned tasks', but this would give a wrong indication of the amount of work we can do in a Sprint (we will not have the same amount of urgent fixes in each sprint). We also want our Team members to be focused on the Sprint, and doing these urgent fixes in a Sprint would distract them; 'context switching' also means time loss, and we try to avoid that.
It all depends on your 'environment' and on how fast urgent issues should be fixed.
Treat all "bug fixes" that don't have a story as new code. Estimate them, and work them as normal. By treating them as new stories you will build up a library of stories and tests. Only then can you begin to pin the behavior of the application.
Take a look at Working Effectively with Legacy Code by Michael Feathers. Here is a link to an excerpt. http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf
-Jason
I have had some success by recognizing that some percentage of a Sprint consists of those unplanned "fires" that need to be addressed. Even in a short sprint, these can happen. If the development team has these responsibilities, then during sprint planning only enough stories are committed to the sprint to leave headroom for these other unplanned activities to occur and be handled as needed. If during the sprint no "fires" ignite, then the team can pull in stories from the top of the backlog. In many ways, it becomes a queue.
The advantage is that there is commitment to the backlog. The disadvantage is that this hole in capacity can be presented as an opportunity to drag the team into non-critical tasks. Velocity can also vary widely if that gap in capacity is filled with unplanned work that is not also tracked.
One way I have gotten around part of this is to create a "support" story that fills out the rest of that capacity with a task representing the hours the team can allocate to support activities. As support situations that cannot be deferred to the backlog enter the sprint, a new story is created with tasks, and the placeholder support story and its tasks are re-estimated to account for the new injection. If support incidents do not enter the sprint, then this support story is reduced as backlog items come in to fill the unused capacity.
The result is that sprints retain some of the desired flow and people feel like they aren't going to get burned when they need to take on supporting work. Velocities are more consistent, burndowns track normally, but there is also a record of the support injections that happen each sprint. Then during the sprint retrospective we can evaluate the planned and unplanned work and if action needs to be taken differently next sprint, we can do so. If that means a new sprint duration, or a pointed conversation with someone in the organization, then we have data to back up the new decisions.
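A sketch of that placeholder bookkeeping (the names and numbers are illustrative):

    # Placeholder "support" story: it absorbs whatever reserve is not
    # consumed by real support injections, keeping sprint capacity constant.
    support_reserve_hours = 40        # headroom planned for support work
    injected_task_hours = [6, 10]     # real incidents that entered the sprint

    placeholder_estimate = support_reserve_hours - sum(injected_task_hours)
    if placeholder_estimate < 0:
        print("Reserve exhausted: re-plan or pull scope from the sprint")
    else:
        print(f"Placeholder support story re-estimated to {placeholder_estimate}h")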

Agile 40-hour week [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Have you ever worked on a (full-time) project where using Agile methodologies actually allowed you to accomplish a 40-hour work-week? If so, what were the most valuable agile practices?
Yes, I'm on a 40-hour week (actually 37.5 hours or so; that's what my contract says) on a project that has been run with SCRUM from the beginning. That was about 2 years ago and the first time we implemented SCRUM. It's the project with the least amount of overtime for me personally, and it's a PC game we're developing. I'm not even in "crunch" mode right now, even though we're shipping an open beta on Friday.
We have learned a lot since then about SCRUM and agile. The single most valuable lesson from my point of view: pod sizes must be reasonable. We started out with pods of 12-20 members; that didn't work out well at all. A maximum of 10 should not be exceeded. Otherwise it's too easy to agree on "flaky" and "vague" tasks, because the stand-up and task-planning meetings would take too long. So keep the pod size small and the tasks specific, and get the product owner or sign-offs together with those who will work on the task.
Also, with a bi-weekly task-planning schedule you have to get every Product Owner to agree on the task list and priorities for the current sprint, and new task requests should be raised before that planning meeting or else they will be ignored for the current sprint. This forced us to improve inter-pod communication.
Scrum and management that is willing to buy into it.
Fair sprint planning. When you negotiate your own sprint you can choose what your team can accomplish rather than have tasks handed down from above. Having your sprint commitment locked in (management can't change it mid-sprint) gives freedom from the ever-changing whims of people.
A well-maintained, prioritized backlog that is kept up cooperatively by the product owner and upper management is very useful. It forces them to sit down and think about the features they want, when they want them and the costs involved. They will often say they need a feature now, but when they realize they have to give up something else to get what they want, their expectations become more realistic.
Time boxing. If you are running into major problems, start removing features from the sprint rather than working extra hours.
You need managerial support for your process; without it, agile is just a word.
Did I mention enlightened management?
Not being able to complete the tasks in a 40 hour week could be due to several things.
I see that this could happen in the early sprints of a Scrum project because the team wasn't sure of:
the amount of work they can do in the sprint and might bite off more than they can chew, and
their ability to estimate accurately the amount of points to award to blocks of work, or
the amount of effort required to perform "a point's worth" of work.
They may also be overly optimistic in what they can accomplish in the time alloted.
After that we get into several of the bad smells of Scrum, specifically:
a team isn't allowed to own its own workload, and maybe
management overrides decisions on what should be in a sprint
If any of these cut in then you are:
doing Scrum in name only, and
"up the creek without a paddle."
There's nothing much you can do apart from correcting any problems in the first list, and this will only come with experience.
Correcting the two points in the second list will require a major rethink of how the company is strangling, not employing, Scrum best practices.
HTH
regards,
Rob
It may sound tough, but let's be realistic: the use of agile or any other flavour of software process has nothing to do with a 40-hour week. Normally the amount of weekly work hours is stipulated in the employment contract, and a developer can use their discretion to put in any additional unpaid work.
Please let's not attribute magic healing powers to whatever your preferred software process is. It can provide a different approach to risk management, a different planning horizon or better stakeholder involvement; however, unless slavery is still lawful where you live, the working day starts when you come through the door and ends when you go home.
It is as much up to a developer as to their management to make sure that the contract of employment is not breached. Your stake is limited by the amount of pay you get and the amount of honest work hours you agreed to give in return, regardless of the methodology used.
Certainly.
For me the most important things that helped (in order of importance):
Cross-functional team - having programmers, testers, technical writers and sales/services people in the same team and talking to each other daily (daily call) was great.
Regular builds and continuous integration
Frequent reviews/demos to stakeholders and customers. This limits the risk, and the time lost to it, to the period of one iteration (Sprint).
Daily Call or Stand up meeting
Adding to all of the above (inaccurate estimates, badly implemented Scrum, etc.), the problem could be a lack of understanding of your team's velocity: something as simple as "how much work a team can accomplish", but which is not as easy to determine as it may seem.
I've worked at several shops that practices various agile methodologies. The most interesting one had 4 "sessions" throughout the day that were about an hour and a half long, with a 20 minute break in between. Friday was Personal Dev day, so the last two sessions were for whatever you wanted to work on.
The key things for us were communication, really nailing down the concept of user stories, defining done to mean "in production", and trust. We also made sure to break the stories down into chunks that were no more than a day long, and ideally 1-2 development sessions. We typically swapped pairs every session to every other session.
Currently I run a 20+ person dev team which is partially distributed. The key tenet for me is sustainable pace - and that means I don't want my teams working more than 40-hour weeks, even occasionally. Obviously if someone wants to stay late and work on things, that's up to them, but in general I fight hard to make sure that we work within the velocity a 40-hour week gives us.
As both a Scrum Master and a personnel manager, I have been a strong advocate of the 40-hour work week. I actively discourage team members from working over 40 hours, as productivity drops quickly when the work-life balance shifts. I have found that recuperating from a late-night work day often takes longer than the extra hours worked.
When it is well-run, Scrum aids in minimizing the "cram" that often occurs at the end of an iteration by encouraging (requiring?) a consistent pace throughout and tools like velocity and burndowns work well to plan and track progress.
