How do you apply Scrum to maintenance and legacy code improvements? [closed] - project-management

As the title suggest...
How can I apply a Scrum process to work that doesn't involve new code and can only be estimated to some degree?
How can I apply a Scrum process in a maintenance-and-emergency-fixes environment (where a fix can take anywhere from 5 minutes to 2 weeks) when I still want to plan ahead?
Basically, how do I handle unplanned tasks and tasks that are very difficult to estimate with the Scrum process? Or am I simply applying the wrong process for this environment?

Basically, how do I handle unplanned tasks and tasks that are very difficult to estimate with the Scrum process? Or am I simply applying the wrong process for this environment?
You're using the wrong process for this environment. What you need is a stack/queue management process which is separate from your planned/estimated Scrum development process.
The reason for this is simple and twofold:
1. As you mention in your post, it is often very difficult to estimate maintenance tasks, especially where legacy systems are involved. Maintenance tasks in general and legacy systems specifically have a tendency to involve 'curly' problems, or have a long 'tail', where one seemingly simple fix requires a slightly more difficult change to another component, which in turn requires an overhaul of the operation of some subsystem, which in turn... you get the point.
2. Quite often when dealing with maintenance tasks, by the time you have finished estimating, you have also finished solving the problem. This makes the process of estimation redundant as a planning tool. Those who insist on dividing estimation from solving the problem for maintenance tasks are simply adding unnecessary overhead.
Put simply, you need a queueing system. It will have these components:
A 'pool' of tasks which have been identified as requiring attention. Newly raised items should always go into the pool, never the queue.
A process of moving these tasks out of the pool and onto the queue. Usually a combination of business/technical knowledge is required.
A queue of tasks which are clearly ordered, such that developers responsible for servicing the queue can simply pick from the front of it.
A method for moving items around in the queue (re-prioritising), to allow 'jumping the queue' for critical/emergency items.
A method for delivering the completed items which does not interrupt servicing the queue. This is important because the overhead of delivering maintenance items is usually significantly lower than development work. You don't want your maintenance team sitting around for a day waiting for the build and test teams to give them the ok each time they deliver a bugfix.
There are other nuances to queue management, but getting these in place should set you on the right path.
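To make that concrete, here's a rough Python sketch of the pool/queue split described above. The names (TaskQueueSystem, raise_item, promote, jump_queue) are mine and purely illustrative, not from any particular tool:

    from collections import deque

    class TaskQueueSystem:
        """Rough sketch of the 'pool plus ordered queue' idea described above."""

        def __init__(self):
            self.pool = []          # newly raised items land here, never in the queue
            self.queue = deque()    # clearly ordered; developers pick from the front

        def raise_item(self, task):
            self.pool.append(task)  # new items always go into the pool first

        def promote(self, task, position=None):
            """Triage step (business/technical): move a task from the pool onto the queue."""
            self.pool.remove(task)
            if position is None:
                self.queue.append(task)            # normal case: back of the queue
            else:
                self.queue.insert(position, task)  # re-prioritised placement

        def jump_queue(self, task):
            """Critical/emergency item goes straight to the front."""
            self.pool.remove(task)
            self.queue.appendleft(task)

        def next_task(self):
            """Developers servicing the queue simply pick from the front."""
            return self.queue.popleft() if self.queue else None

The important property is the one-way flow: new items can only enter the pool, and only a deliberate triage step moves them onto the ordered queue that developers service.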

If you have that much churn in your environment, then your key is going to be shorter iterations. I've heard of teams doing daily iterations. You can also move towards a Kanban-style approach where you have a queue with a fixed limit (usually very low, like 2 or 3 items) and no more items can be added until those are done.
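If you do try the Kanban-style queue, the mechanics are small enough to sketch in a few lines of Python (the limit of 3 is just the example number from above, and the class name is mine):

    class WipLimitedQueue:
        """Sketch of a Kanban-style queue with a fixed work-in-progress limit."""

        def __init__(self, limit=3):
            self.limit = limit
            self.in_progress = []
            self.waiting = []

        def add(self, item):
            # Nothing new is started while the in-progress slots are full.
            if len(self.in_progress) < self.limit:
                self.in_progress.append(item)
            else:
                self.waiting.append(item)

        def complete(self, item):
            self.in_progress.remove(item)
            if self.waiting:
                self.in_progress.append(self.waiting.pop(0))  # pull the next item in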
What I'd do is try out one week iterations with the daily stand-ups, backlog prioritization, and "done, done". Then reevaluate after 5 or 6 weeks to see what could be improved. Don't be afraid to try the process as is - and don't be afraid to tweak it to your environment once you've tried it.
There was also a PDF called Agile for Support and Operations in 5 minutes that was recently posted to the Scrum Development list on Yahoo!

No one said that backlog items have to be new code. Maintenance work, whether bug fixes, enhancements or data fixes can be put into the Product Backlog, estimated and prioritized. This is actually one of the biggest benefits of using Scrum - no more arguments with users about whether something is a bug fix or an enhancement.
With Waterfall, there's a tacit understanding that bugs are the responsibility of the developers. Somehow, they are on the hook to fix them without impacting the development of new code and features. So they are "free" to the users, but a massive inconvenience to the developers.
In Scrum, you recognize that all work takes time. There is no "free". So the developers freely accept that something is a bug but it still goes into the Product Backlog. Then it's up to the Customer to decide if fixing the bug is more important than adding new features. There are some bugs that you can live with.

As the title suggests... How can I apply a Scrum process to work that doesn't involve new code and can only be estimated to some degree?
On the contrary, I've heard teams find adopting Scrum easier in the maintenance phase, because the changes are smaller (no grand design changes) and hence easier to estimate. Any new change request is added to the product backlog, estimated by devs and then prioritized by the product owner.
How can I apply a Scrum process in a maintenance-and-emergency-fixes environment (where a fix can take anywhere from 5 minutes to 2 weeks) when I still want to plan ahead?
If you're hinting at fire-fighting type of activity, keep a portion of the iteration work quota for such activities. Based on historical trends/activity, you should be able to say, e.g., we have a velocity of 10 story points per iteration (4-person team, 5-day iteration). Each of us spends about a day a week responding to emergencies, so we should only pick 8 points' worth of backlog items for the next iteration to be realistic. If we don't have emergency issues, we'll pick up the next top item from the prioritized backlog.
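As a worked version of that arithmetic (using the numbers from the paragraph above, not real measurements):

    team_size = 4
    iteration_days = 5
    velocity = 10                     # story points per iteration, from history

    emergency_days = team_size * 1    # "each of us spends about a day a week"
    total_days = team_size * iteration_days               # 20 person-days
    available_fraction = 1 - emergency_days / total_days  # 0.8

    planned_points = velocity * available_fraction
    print(planned_points)             # 8.0 -> commit to about 8 points of backlog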
CoryFoy mentions a more dynamic/real-time approach with kanban post-its in his response.
Basically, how do I handle unplanned tasks and tasks that are very difficult to estimate with the Scrum process? Or am I simply applying the wrong process for this environment?
AFAIR Scrum doesn't mandate an estimation technique. Use one that the team is most comfortable with: man-days, story points, etc. The only way to get better at estimation, I believe, is practice and experience. The more the same set of people sit together to estimate new tasks, the better their estimates get. In a maintenance kind of environment, I would assume that it's easier to estimate because the system is more or less well known to the group. If not, schedule/use spikes to get more clarity.
I sense that you're attempting to eat an elephant here. I'd suggest the following bites:
Working Effectively with Legacy code
Agile Estimating and Planning
Agile Project Management with Scrum

Treat all fixes and improvements as individual stories and estimate accordingly. My personal view is that things that take less than 10-15 minutes to fix should just be done straight away. Those that take longer become part of the current 'fix & improvement' iteration cycle. As with estimating regular requirements, you take a best guess as a team. As more information comes to light and the estimates turn out to be off, adjust the iteration and upcoming sprints.
It's hard to apply an iteration cycle to fixes and improvements because, more often than not, they prevent the system from working as it should and the 'business' puts pressure on for them to go live ASAP. At this point it may work out better moving to a really short iteration cycle, like one or two weeks.

You ask about how to use a process for emergencies. Try to reserve the word "emergency" for things that require hacking the code in the production environment with the users connected at the same time. Otherwise, stakeholders are likely to abuse the word and call an emergency on anything they would like to have really fast. Lack of process does not mean out of control: somebody must be accountable for declaring the emergency, and somebody (best practice is somebody else) must authorize the changes outside the normal process and then take responsibility for it.
Other than that, the suggestion of using each iteration to complete a number of fixes and improvements is probably the best way to go.

This depends a lot on the application life cycle. If this is a 'Sunset' application to be retired soon, of course, the focus would only be on fixing the top-priority bugs.
If the product is 'Mature', has a roadmap and is continuing to evolve, you will have fixes and enhancements to take care of. There's a lot of impetus to keep the design clean and evolve by refactoring. This may mean periodic minor and major releases [except for eFixes - emergency fixes/hotfixes]. You can practice agile to your heart's delight, as enhancements and fixes can be storyboarded and made part of your Sprint Backlog. The entire list would make up your Product Backlog.
Bottom line: if you want to refactor and keep your application design clean [programmers tend to take shortcuts if the focus is exclusively bug fixing], it can only happen with a 'living' application - one that is evolved and updated. And agile is a natural fit.
Even if you have only fixes (that is, it's a 'Turing complete' ;) or Sunset application), it helps if they can all be rolled into a sprint and rolled out to production at the end of each sprint. If fixes need to be rolled into production as and when they're fixed, it's much more difficult to apply Scrum.

We have applied scrum in this context.
Some of the keys to success:
1. Everyone in the enterprise buys into the Scrum idea (this is crucial for success)
2. Sprints of about 2 weeks (our first 2-3 sprints were 1 week long, to get used to the process)
3. Under no circumstances can a point be added to the current sprint
4. If a real emergency arises, stop the sprint, do a retrospective and start a new sprint
5. Take time for retrospection (time to share thoughts about the last sprint and to analyze it)
6. In each sprint, insert at least one task to improve the process (often added to the backlog during the retrospective); it's good for morale and, at the end of the day, you will be on your way to having fewer emergencies
7. TIME-BOXED daily stand-up meetings
For the estimation, usually the more you estimate the more precise you become. What is good with Scrum is that each programmer picks his task and can set a new estimate if he thinks it's not realistic. And if you still have some issues with estimation, let your team find a solution... you may be surprised by what they come up with.
For the 2-week fix: if that's the original estimate, cut it into smaller pieces. If you made a more optimistic estimate (say 2-3 days), the issue should be raised as a blocker in the stand-up meeting. Maybe somebody else has ideas about how to fix it. You can decide to do some pair programming to find a solution; sometimes just describing the bug to another programmer helps a lot with debugging. You can also push it back behind other tasks in the sprint. The idea is to deliver fully functional tasks. If you don't have time to fix it in full and to demonstrate it, even if you're at 90% done (yeah! we know what that means), you consider it not done in the sprint. In the next sprint you will be able to address it with a correct time estimate.
Finally, from what I understand, Scrum is more about having "tools" to improve your process. You start with something simple. You do small iterations. In each iteration you have a FIXED TARGET to complete instead of an infinite bug list. Because you pick your tasks from the list during planning (as opposed to being assigned them), you become more engaged in delivering them. With the stand-up meeting, you meet your peers every day with your TODO list... you want to respect the commitment you made the day before. With each iteration, you take the time to talk to each other and to identify what's going well and what should be improved. You also take action to improve it and continue doing what's working. Don't be afraid to change anything, even what I said ;) or even any basic of Scrum itself... The real key is to adapt Scrum to what your team needs to be happy and productive. You will not see it after one iteration, but after many of them...

I'd highly recommend looking at what value sprints/iterations would give you. It makes sense to apply them when there are enough tasks to do that they need to be prioritized and when your stakeholders need to know roughly when something will be done.
In this case I'd recommend combining three approaches:
schedule as many incoming tasks as possible for the next iteration at the earliest
use Yesterday's Weather to work out how much buffer you need for tasks that have to be dealt with immediately
use very short sprints, so as to maximize the number of tasks that can wait until at least the start of the next iteration
In fact that is true for every Agile/Scrum project, as they are in maintenance mode - adding to an existing, working system - from iteration 2.
If iterations don't provide enough value, you might want to take a look at a kanban/queuing system instead. Basically, when you are finished with a task, just pull the next task from a prioritized task queue.

In my opinion it depends on how often you have a 'real' release. In our specific case we have one major release each year and some minor releases during the year.
This means that when a sprint is done, it's not immediately deployed to our production server. Most of the time a few sprints will take place before our complete 'project' is finished. Of course we demo our sprints and we deploy them to our testing server. The 'project' in its totality will undergo some end-to-end testing and will finally be deployed to our production servers -> this is a minor release. We may decide not to deploy it immediately to our production server, for instance when it's dependent on other products/projects that need to be upgraded first. We then deploy it in our major release.
But when issues arise on our production server, immediate action may be required. So there's no time to ask a product owner for the goal or importance (if we even have one for such an issue) because it blocks our clients from working with our application. In such urgent cases, these kinds of issues will not be put into a Product Backlog or sprint; they are pure maintenance tasks to be solved, tested and deployed as soon as possible, as individual items.
How do we combine this with our Sprint? In order to keep our Team members focused on the sprint, we decided to 'opt in / opt out' people from the Team. This means that one or more people will not be part of the Team for a certain Sprint and can focus on other jobs like these urgent fixes. The next Sprint this person will again be part of the Team and someone else will be responsible for emergency calls.
Another option could be to reserve something like 20% of the time in a Sprint for 'unplanned tasks', but this would give a wrong indication of the amount of work we can do in a Sprint (we will not have the same amount of urgent fixes during each sprint). We also want our Team members to be focused on the Sprint, and doing these urgent fixes in a Sprint would distract them. 'Context-switching' also means time loss and we try to avoid that.
It all depends on your 'environment' and on how fast urgent issues should be fixed.

Treat all "bug fixes" that don't have a story as new code. Estimate them, and work them as normal. By treating them as new stories you will build up a library of stories and tests. Only then can you begin to pin down the behavior of the application.
Take a look at Working Effectively with Legacy Code by Michael Feathers. Here is a link to an excerpt. http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf
-Jason

I have had some success by recognizing that some percentage of a Sprint consists of those unplanned "fires" that need to be addressed. Even in a short sprint, these can happen. If the development team has these responsibilities, then during sprint planning, only stories are committed to the sprint that allow enough headroom for these other unplanned activities to occur and be handled as needed. If during the sprint no "fires" ignite, then the team can pull in stories from the top of the backlog. In many ways, it becomes a queue.
The advantage is that there is commitment to the backlog. The disadvantage is that there is this hole in capacity that can be communicated as an opportunity to drag the team into non-critical tasks. Velocity can also vary widely if that gap in capacity is filled with unplanned work that is not also tracked.
One way I have gotten around part of this is to create a "support" story that fills out the rest of that capacity with a task that represents the available hours the team can allocate to support activities. As support situations enter the sprint that cannot be deferred to the backlog, then a new story is created with tasks, and the placeholder support story and tasks are re-estimated to account for the new injection. If support incidents do not enter the sprint, then this support story is reduced as backlog items come in to fill the unused capacity.
The result is that sprints retain some of the desired flow and people feel like they aren't going to get burned when they need to take on supporting work. Velocities are more consistent, burndowns track normally, but there is also a record of the support injections that happen each sprint. Then during the sprint retrospective we can evaluate the planned and unplanned work and if action needs to be taken differently next sprint, we can do so. If that means a new sprint duration, or a pointed conversation with someone in the organization, then we have data to back up the new decisions.
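For what it's worth, the bookkeeping behind that placeholder story fits in a few lines of Python; this is only a sketch, and the class and method names are mine rather than anything from a real tool:

    class SupportPlaceholder:
        """Sketch of the 'support' placeholder story described above."""

        def __init__(self, reserved_hours):
            self.remaining_hours = reserved_hours  # capacity held back for support work
            self.injections = []                   # support stories that entered the sprint

        def inject(self, title, estimated_hours):
            # An incident that cannot be deferred to the backlog becomes a real
            # story, and the placeholder shrinks by the same amount.
            self.injections.append((title, estimated_hours))
            self.remaining_hours = max(0, self.remaining_hours - estimated_hours)

        def release_to_backlog(self, estimated_hours):
            # No incidents arrived: hand unused placeholder capacity to backlog work.
            if estimated_hours <= self.remaining_hours:
                self.remaining_hours -= estimated_hours
                return True
            return False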

Related

What factors do you consider when deciding what to work on next? [closed]

Lately I've been feeling like I'm being pulled in different directions. In my company there are a lot of forces demanding my time and I'm having a hard time deciding which direction to focus my energies.
I have the choice of several different coding projects, some of which could demand a lot more work over time, and present unknown challenges.
How do you decide what to work on next in the big picture? Grease the squeakier wheels? Low hanging fruit? (translation: easier projects)
Do you have a system for determining and reaching your goals?
We've used a value graph to identify projects based on value vs. effort, as a part of lean.
A few of the things I ask that affect what I work on :
Are any of my tasks holding up other developers or keeping other people from being able to do their work? If so, it'll probably get done first.
Do I have any tasks with deadlines coming up? If so, it might be the next candidate to work on, unless I can justify something else being worth the task slipping schedule.
Will work on any of the tasks affect (make obsolete / make easier / make more difficult) any of the other tasks I have? If so, it might get moved up.
Is there a good chance that any of the tasks will change (requirements still not concrete / other tasks out there that might affect it), making time spent on it now likely wasted? If so, it gets moved down.
Are there any things that just really bug me, that I think I can get done before anyone notices that I'm not working on whatever my boss thinks is the highest priority? (Which is justified because otherwise they'll distract me from thinking about the other things.)
Are there any tasks I should work on while things are still fresh in my mind?
... other than that, we go with a combination of whatever sort of fixed deadlines I have, what tasks I have that might be holding up other people, what the boss wants first (I like this boss, and I only have one giving me tasks ... in the past, I'd've answered differently), which ones seem more interesting, which ones I can get done quickly to just get them off the list, etc.
There have been times when I've had more than one manager, and I just had to put everything on a list on a whiteboard, and told them to number them (17 items, which kept growing). Management balked, but I was sick of getting bitched out at meetings week after week with stuff not being done, and having to go through the list of every 'emergency' task I was given. (and being told that any manager in the department was allowed to task me in the case of an emergency ... which was something like 30 people ... and being bitched at when I asked who got to decide if it was an emergency or not.)
When I have the choice: whatever seems to be the biggest challenge with the most fun attached.
Fun + challenge = rapid learning, to me.
And sometimes that takes me away from the technical stuff - people can be a fun challenge too.
I tend to weigh four things when deciding what to work on next:
Is this item a requirement for something else?
Can I work on this item yet (ie, am I waiting on something for it)?
How fast/easily can I get this item done?
How interesting do I find the work required for this item?
In my current team (working on a variety of Business Intelligence software projects), we've recently started adopting a variant of classical "agile" project planning and estimating -- everybody seems to be pretty happy with it so far, including us (developers of different levels of experience), the product managers (highly technical people, typically also with some development experience, but mostly interested in the business side of things), management (pretty technical at the level we report to, but also less-technical, more-businessy directors and VPs), and other stakeholders (users and would-be users of our software). But, of course, it is early times, and we'll adjust as we go along. (In the past few years I used other variants of this in very different application areas, such as cluster management software; but I've often also used more ad-hoc, less-structured approaches).
The outline is as follows. At each iteration (we're currently on 2-week iteration cycles), the PMs choose some "elementary units of business value" that they might like to get from projects in our area -- a typical unit would be one feature, a bug fix, some optimization aspect, etc. In a small meeting with tech leads and one or two senior engineers, each unit is decomposed into engineering tasks (and dependencies among tasks are identified). In a larger whole-team meeting, the relative "cost" of each task (how much time, roughly, it will take to perform that task relative to other tasks) is collectively assessed (we're using completely abstract units of effort we call "points", though I've seen other teams use less abstract units such as "ideal engineering-days"). The costs assessed include unit testing and technical documentation.
The tasks, each with its assessed cost, go on what's called "the backlog" for the team, together with "internal restructuring" tasks (typically refactorings that will deliver no new user-observable plus, but will make further development and maintenance more productive), also with assessed costs and a summary of expected benefits (which must be expressed in ways understandable to the PMs -- fortunately, as I said, ours are highly technical people). A refactoring may also, by engineering team consensus, be deemed a prerequisite of certain business-requested tasks (e.g. "makes no sense to work further on component X until class Y, too large, is properly split, which will take N points").
The PMs now get to order the tasks in the backlog in any way they prefer, based on the business value that completing the units those tasks make up would deliver, subject to the dependency constraints. They have a good idea of roughly how many "points" the team may accomplish in a 2-week iteration (our "velocity") based on past results, so they try to make sure some business-valuable release can be performed at the end of the iteration (as opposed to having many business-valuable thingies "in flight"... but none completed and deliverable to stakeholders yet!-).
The team then devotes about 80% of its time and effort to tackling the top-priority tasks as designated by the PMs (including pair programming some of the time, for particularly urgent tasks or for situations where one team member needs to learn more about some technology or some part of the codebase, and another team member who's an expert in those is available for pairing up with them for a while). The priority order is an important indication, but it's not entirely rigid (e.g. if the top task requires extensive work in Java, and the second one requires it in Python, I may well pick the second one, as my relative productivity will be enormously higher that way -- and vice versa for a team member who's a Java guru, etc etc).
"Priority 0" aka "Code Red" issues may arise at any time, and, if they do, by definition they will take priority over any other task (and be accounted for only retroactively in the planning, to make sure velocity is assessed properly). But, as we do a pretty good job with testing, release engineering, and other quality-assurance practices, those emergencies are fortunately few and far between.
This, plus other "mandatory" ways for engineers to spend their time (training courses, all-hands meetings, quarterly performance self-evaluations and peer reviews, etc), is supposed to account for about 80% of engineers' time -- the remaining 20% being time each engineer should devote to "something completely different" ("blue-sky" exploratory projects, "engineering community" efforts, open-source contributions, etc, etc), not directly related to the projects at hand. Nobody actually measures the hours precisely, but that's still a useful guideline (I keep thinking of ways to make measurement easy and painless that I could implement in my 20% time, to help me allocate time and effort more precisely, but I haven't actually gotten any round tuits yet;-).
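A toy version of that planning step - filling one iteration from the PM-ordered backlog while respecting dependencies and the team's velocity - might look like the sketch below. The field names are assumptions for illustration, not our actual tooling:

    def plan_iteration(backlog, velocity):
        """Greedy fill of one iteration from a backlog already in priority order.

        backlog: list of dicts such as
            {"name": "split class Y", "points": 3, "needs": []}
            {"name": "feature X", "points": 5, "needs": ["split class Y"]}
        velocity: points the team expects to complete, based on past iterations.
        """
        planned, remaining = [], velocity
        for task in backlog:
            deps_met = all(dep in planned for dep in task["needs"])
            if deps_met and task["points"] <= remaining:
                planned.append(task["name"])
                remaining -= task["points"]
        return planned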
Easy. I ask my boss.
High value + Low risk.
Only go on high value + something with higher risk if you already have a track record in the company / credibility.
Easy: What is the highest and best use of my time.
If I am involved in a project and that isn't the answer to that question, I ask myself why I'm working on it and how soon can I finish it.
:) On the job I leave this decision to my Project Leader and Team Leader, as they know better what the project priority is.
At home, I do whatever offers fun, learning and a challenge.

When I have a choice about which job to start next I try to find a balance between two things: quick, easy fixes that are highly visible (e.g. fix the non-critical bug that a user has been complaining about) and taking on a project where I can use something I've been learning about. I find if I alternate between those types of jobs I can keep myself and my co-workers happy.
I'd look at what has the highest priority from management's perspective for an initial prioritizing of upcoming projects. If they are all priority 1 projects, then there are a few other factors that may help my decision:
Do I see how valuable the project will be to the organization? Is this the type of thing that really helps with the competitive advantage that we have?
Does there seem to be a buildup of projects of a specific size? For example, are lots of little projects being ignored for the few really big ones? If so, I may take some of the little ones that may be seen as quick wins that may help my team look good.
Do any of these projects use my strengths? This can be a bit hard to determine but it could help a lot with motivation, at least using the Marcus Buckingham interpretation of a strength.
What teams and structures are in place for the other projects? I don't think I'd want to join a project that looks like a massive train wreck about to happen. Is there enough structure so that I won't go off and do my own thing that may hurt the project's chances of success? Do I believe I could handle working with X using methodology Y and technology Z?
Those are a few of the ways I'd look at making the decision, along with talking to my manager, as part of this is his job, right?
You should ask yourself a question. Are you pursuing a general IT career path which may or may not include your current company, or, do you intend to have a long career with your current employer?
If you intend to have a successful IT career moving around different employers then, sadly, the most successful strategy is "buzzword collecting". Identify the current/next big thing and try to get it on your CV, e.g. find a trivial AJAX-with-SOA-back-end project which may never go to production; this will enhance your value to future employers even if the project had little value to your current employer.
If you plan on a long career with your current employer, the most successful strategy would be to align your goals with the business. For instance, the most critical project for the business may be upgrading an old unsexy VB/Oracle stock control package to include an ancient EDIFACT interface with a new supplier. If you are seen as a key player in the success of such a project you will rank very highly (and rightly so) in your employer's esteem, and your opinions and advice will be taken seriously.
Since you didn't specify whether you talk from developer or manager perspective, I'll try to cover both.
Providing a framework for prioritisation of efforts is management's direct job. The immediate day-to-day prioritisation may stay with management or be handed over to developers.
The decision about who should work on what and when is, in the average company, likely to be perceived as a matter of power, control and prestige by both groups, with the one who makes the most prioritisation decisions clearly the more important player.
In shrewd companies, however, it is well understood that decisions have several interesting properties:
Each takes time and effort to make, which is diverted from doing the actual work.
Every decision is a trade off
To make a good trade-off, whoever makes the decision needs all the right information at her or his disposal.
Consequently, management doesn't have all the information to make every decision, nor are they likely to have the right information to make a good trade-off in each case; but developers cannot spend their time making hundreds of prioritisation decisions per day instead of producing software, nor do all the necessary co-ordination.
Hence the solution is for management to create a simple framework for task assessment and prioritisation and hand it to developers, who will quickly apply it on a case-by-case basis, filling in the gaps. In management lingo such a framework is called a strategy; it saves time by removing repetitive, redundant decision making, gives focus and consistency to the efforts, and provides direction. It should be detailed enough to remove the burden of re-assessing the situation each time, but loose enough to allow developers to make the right choices when it matters.
The framework itself may give very straightforward rules for making decisions or, alternatively, provide some analytical methods such as Pareto, SWOT, Cost Benefit, Expected Return analysis or Porter Five Forces etc. However, it is worth keeping the rules simple, unambiguous and as straightforward as possible.
Joel Spolsky has made available to the world several very good internal software strategy documents written in plain English. Not all the documents are directly to do with developing software (showing that it is actually viable to have different, unrelated decision frameworks for various aspects of company life). Also, since the documents are several years apart, it's actually possible to see how these frameworks kept changing to suit the situation:
Fog Creek Compensation
Our .NET Strategy
Set Your Priorities
Fruity treats, customization, and supersonics: FogBugz 7 is here
If you're interested in choosing what things to work on from a personal point of view, one of the best pieces of advice around, in my opinion, is the one given by Paul Graham in his essay "What You'll Wish You'd Known".
Fundamentally as software developers we are business enablers. Your priorities should be in tandem with business priorities and be pragmatic between quick wins and larger strategic initiatives. Effort and Priority make an excellent matrix in which to score projects taking the least effort/highest priority first.
From the tone of your question it sounds like business priorities are either unclear or there is conflicting direction between stakeholders. This is the place to start and it will make your decisions much easier once it is resolved.
You really need to discuss this with the business because only they can tell you what has the highest value to them. After that, I would go after the items that carry the most risk because if something is going to cause a schedule to slip it's best to know early rather than late.
If you're having trouble working out what the business priorities are - usually caused by being on multiple projects with different stakeholders who all think that their project is the most important - you can try getting both stakeholders in a room to discuss which project is higher priority. Or you could delegate that negotiation to your manager, as that is actually his job.
I tend to work on multiple projects at one time, so I will work on a harder project, make some headway, and when I get stuck and need to think about how to do the next part, I will go to some low-hanging fruit, so that I can continue to make headway, as I give my subconscious time to work on the harder problem.
But, it really depends on your priorities. I have never been good at just trying to impress people, so I just quietly go about trying to get work done.
If we're talking about a work environment, I go through and just prioritise things - what is mission critical, what is urgent - and then anything else just gets put on the list and gets done in the order it comes in.
As for picking the next big project at work, I like to do what offers the greatest challenge. I had been working as a developer for a year and I had the opportunity to do some work for a very large company working with some security experts and doing things I'd never done. So I chose that, and it looks great on my resume.
In terms of personal development work (not as in self-help), again I'll go for something that challenges me. It's got to be something that I haven't done before. It doesn't matter if someone else has done it - I haven't, and I can learn from it.
In the end, it all comes down to what value it holds for you, and what value it holds for the client. Luckily, I've got a few years of sales experience under my belt as well, so I can easily sell the necessary products to clients.
If your problem is one of procrastination, then perhaps you need to focus on getting rid of those jobs that you fear tackling most - or at least making some forward progress on them to reduce the stress of considering how far behind you are.
This book, by Mark Forster, provides some good tips.
Failing that, you might want to produce an iteration plan. Let everyone vote for jobs - whatever gets the most votes gets scheduled right away. That way every stakeholder, including yourself, gets some input into scheduling.
I would ask the boss; if they won't make the decision then I would go for the project which I felt was going to be best for the company, in terms of profit and the morale of the team.
If I was torn between two projects I would go for the one that sounds like it will develop my skills more and interest me most.
If a project sounds exciting I become more driven and determined too :)
Given the nature of your question, I'm assuming this is all work that somebody thinks you should be doing, but there clearly isn't enough time to do it all. Therefore, you are just looking for priority knowing full well that some items are likely not to get done.
Impact/Risk if the item isn't done.
Visibility - does anybody else really care about this task?
Alignment with department goals - weed out things that really aren't your job
Alignment with company goals - weed out things that aren't important to your company's business.
Enjoyment factor
Alignment with career goals - Many people would rank this item significantly higher. It depends how important your career is versus what you do today. I've ranked today's enjoyment a bit higher than long-term career goals. Some projects may be horrible, but they can move your career along.
I guess it depends on how much is on the list. If there's a lot of low hanging fruit that's been on the list for a while, it may be worth while to take some time and clean some of it off. That way, there would be less demands on the available time and potentially more time or incentive available to work on the big projects.
Plus it can be cathartic to be able to cross a bunch of stuff off the list.
I will usually start working on a larger project first. Then when I feel I need to step away from it for a bit, usually so I can approach it with a clear mind later, I try to kick out some of the quick one-off tasks or simple projects.
I know that isn't very descriptive, but calling the occasional audible for distraction seems to work out well for me when tackling a big project list.
I tend to look at projects from a learner's perspective. I tend to choose a project that will help me learn something new, and I also look for "cool" and interesting as well.
On the other hand, you can choose your next project according to where it would lead you. Ask yourself if you have a career goal that project X will help you achieve. Perhaps a high-profile project is better than an interesting one - at least for a short while.
One way to decide is to define several key points that matter to you (e.g. new technology, interest, etc.), rate each opportunity against them and see which gets the higher score.

How do you manage multiple clients' requirements that compete for limited resources [closed]

So I manage projects for an internal development team; that is, all our clients are internal to the business.
I am trying to figure out the best way to collate requests and allocate the resources that we have. (Unfortunately the business doesn't have an endless budget for developers, but they still want everything done.)
I would like to do some sort of backlog and feature planning, but how do I do that across multiple clients and multiple products?
And last but not least, the emergency requests. Someone says we need this tomorrow - how do you deal with that? I seem to have one every week; we can handle them, we just never eat into the feature backlog.
Depending on how cooperative the sub-projects are, you can either manage the features and the backlog in a single order of priority (thus the most important task to the organization comes first - transparent, but the decision makers must be able to converge on the priorities), or split the time/team and then manage the priorities with each client within the time or resource slice allocated to that client. Organization-wide this is not the most efficient way, but once this hard decision is taken you are free to manage each project independently.
Exactly the same goes for the emergency requests. An enlightened organization would be able to prioritize these across different divisions - i.e. an urgent security fix outranks any feature regardless of ownership - but more likely the "emergency" stuff will either be done as part of the "slice" allocated to the client, or according to how much pressure the "emergency requester" can produce.
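If you do go with the time/resource-slice option, the allocation itself is simple enough to sketch; the function and the client names below are illustrative only:

    def plan_by_client_slice(capacity_hours, slices, backlogs):
        """Fill each client's slice of capacity from that client's prioritized backlog.

        capacity_hours: total team hours for the period.
        slices: fraction of capacity per client, e.g. {"Sales": 0.5, "Finance": 0.5}.
        backlogs: per-client lists of (task, estimated_hours), already prioritized.
        """
        plan = {}
        for client, share in slices.items():
            budget = capacity_hours * share
            chosen = []
            for task, hours in backlogs.get(client, []):
                if hours <= budget:
                    chosen.append(task)
                    budget -= hours
            plan[client] = chosen
        return plan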
It looks like you have a lot of change going on. If you are not yet using an agile process, such as XP, you should try - at least the managing requirements/features and delivering often parts. Delivering changes often can also relieve some of the pressure on you as the progress from task to task becomes more visible.
I agree with MaxVT that you may want to look at something like XP, as, even if you are delivering programs that have limited functionality, they are still getting something that can help.
You may also want to start setting expectations of those making the demands, based on what you already have in the backlog list.
This would mean that you should go through your backlog, with the developers, and determine what tasks need to be done, and assign time to each task. This will give a way to state that for project A there is x hours, so you can tell someone when you may be able to get to it.
By being agile you can look at what features must be implemented for the app to be useful, and then get the application to that point. You can then track features to be added, in some priority fashion, so you implement the most important features first, then as you have time you add more functionality to it.
Knowing what must be done for each feature and how much time it should take will better help you with the project management, and for you to generate your reports for those above you.
When you have limited resources and emergencies are eating up most of your time, one of the best things you can do (aside from dealing with the emergency), is to make the effect visible so that you have data for higher-level discussions with stakeholders (e.g., "was this really an emergency? what could have been done to keep this from having become an emergency?", etc.)
In thinking about a prioritized backlog, you're on the right track, in part because the discussion over priorities is one that you can (or might be able to) push back on your internal customers.
For emergency "skip the queue!" work, one thing I've seen work is to use "red cards": either skip those right to the head of the queue, or rate-limit them so only one is in progress at a time, so that the rest of the team that's not working on the emergency can keep making progress on non-emergency work.
A key part of whatever approach you take is to gather enough data to support having data-driven discussion with your customers. In the simplest form, have the team give rough estimates for the size of each item in the backlog, so that you can both track how much work you're getting done each week, and how much work is in the backlog. After a few weeks, you'll have supporting data to take to your boss to support a "We're spending 90% of our time dealing with emergencies. What are we going to do about that?" discussion.
It's all in the scheduling and resources (more about resources in a moment). There is also an aspect of change control to your problem.
scheduling
I use Google Spreadsheets because it's simple, everyone is comfortable/familiar with it, and everyone can access it simultaneously (it's free too!).
If you look at the picture below, you can see there is an 'Assigned To' column - that's the initials of the programmer. You would have multiple schedules like this, one for each project.
In the second image, you can see some summaries, including how many hours in total the programmer has been assigned on the project. This lets you plan when they will be available to work on another project.
The rest of the article is here -> Project Schedules with Google Spreadsheets
resources
You may have your project schedules in shape, but what do you do if all your in-house programmers are busy on projects already? You are effectively out of resources. This is where outsourced contractors can help. If you don't have budget for this, then people simply have to wait until programmers come off the projects they are assigned to (e.g. "sorry boss, but Greg won't finish his current project for another 2 weeks. If you want to pull him off that project, we can, but then that project will deliver late" <- good luck with that :)
bug management & change control
There are approaches for bug management and change control too, but I don't want to get into them too deeply, other than to say: slot 2-3 hours a week into each programmer's Outlook calendar for debugging. With features that dribble in, gather together 5-10 and document them before you hand them to a programmer to code (make a package out of them, like v1.1).
There are two sides to this, the practical and the political. The first is straight forward, the second is a mess.
Some of this sounds like horrible management/marketing bollocks, so I apologise in advance, but it is good.
To sort out prioritisation on a practical level I start with a high level assessment of effort and benefit, each on a scale of 1 to 10 and then plot them on a matrix (effort on the X axis, benefit on the Y axis). The business have to assess benefit (and justify it), the IT team and the business jointly assess effort (the business team don't get to mess with IT's estimates but they may need to add in their own time).
Broadly speaking, anything in the high effort, low benefit quadrant (bottom right) gets killed immediately - too much for too little. Anything in the high effort, high benefit quadrant (top right) gets classed as a major project and needs to be investigated further, as these are typically where you'll find one or other assessment is out. Anything high benefit, low effort (top left) is a quick win and jumps to the top of the list. Finally, things in the low effort, low benefit quadrant (bottom left) are question marks. As with major projects these need to be examined further, though in this case you're trying to turn them into either a quick win (you may be able to modify the scope to increase the benefit or decrease the effort) or to show that they're actually going to be more work than people think and should be killed.
But generally speaking the closer to the top left corner (low effort, high benefit) things are, the sooner they should get done.
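The scoring and quadrant rules above are easy to mechanise. Here's a small sketch; the midpoint of 5 used to split 'high' from 'low' is my assumption, not part of the original process:

    def classify(effort, benefit, midpoint=5):
        """Quadrant for 1-10 effort and benefit scores (midpoint is an assumption)."""
        if benefit > midpoint and effort <= midpoint:
            return "quick win - top of the list"
        if benefit > midpoint:
            return "major project - investigate further"
        if effort <= midpoint:
            return "question mark - reshape or kill"
        return "kill - too much for too little"

    # Items closer to low effort / high benefit sort towards the top.
    items = {"quick report": (2, 8), "full rewrite": (9, 9), "vanity feature": (8, 3)}
    for name, (effort, benefit) in sorted(items.items(), key=lambda kv: kv[1][0] - kv[1][1]):
        print(name, "->", classify(effort, benefit))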
One critical thing - all the information and the whole matrix is public - if there are disagreements the developer is just administering the process, not adjudicating on it. Publish the latest matrix (and resulting schedule) on a weekly basis and have regular calls between all interested parties to ensure that everyone agrees with it.
Which comes to the political side - where you know that something is a waste of time but it's being asked for by someone very important. Ideally this is where your manager earns his keep but generally if you have a transparent process (that is the benefit/effort scores and the matrix are all public), the guy pushing for something that's not worthwhile will either (a) struggle because everyone else pushes back pointing out that his change is trash and shouldn't happen at all let alone take priority, or (b) have enough clout that it doesn't matter but at least you're covered as the other people who would pressure you understand that.
And the transparent process is how you deal with the last minute requests. If everyone who has requests in knows where they are in the queue, people dropping things in at no notice becomes harder for others as they're not just messing with the developers, they're messing with their peers and pushing back their projects.
Generally speaking those who do it will either have enough clout that you couldn't stop it anyway or irritate the other people asking for changes enough that they'll be forced to minimise their bad behavior.
One interesting subset of these last minute issues is time dependent projects, that is ones where the benefit is greater if it is done this week than if it waits six months. An example might be where there's a three month window before a legislative tax change comes in and the project would allow the company to take advantage of that.
In that case the project gets assessed on its maximum feasible benefit, then reassessed when it's placed in the schedule - so if with its maximum benefit it happens tomorrow then great, but if even with its maximum benefit it can't happen for two months, you have to drop the benefit based on what it would deliver at that point in time and reassess (normally this kills it).
So in short:
Good benefit / effort assessment allows you to pick those which will positively impact the business most
Transparent well defined process which the developer administers rather than adjudicates on.
If it was development for external clients, I would say that you need to charge a little more for "Emergency" requests. However, short of changing your organisation's internal charging model (and ticking off your F&A branch), there's not much you can do on that end.
You have a good case for bringing in one more developer, but of course $$$'s always tight.
Frankly, I love FogBUGZ (and to be fair there are other case/incident trackers out there, but FB is my favourite). I can see quickly what I have in the queue and their priority (regardless of project/client).
The other pieces of advice I'd give you are these:
Advertise the size of your work queue. There may be some internal secrecy required, but perhaps you can post on a wall/website all the projects you have on the go.
Get organizational priorities for the existing projects. Someone in your organisation must have an idea of what is needed when, and what is most important - be it the manager, CEO or BOD.
Everything else goes at the end of the queue.
If Bob has an emergency request and you are working on Jim's and Mary's requests; make Bob go through Mary and Jim first to get permission to leap frog them in the queue (or if necessary go above their heads).
Or, Bob can choose to push one or more of his projects that are higher in the queue down and replace one with his emergency request. Either way, it imparts an inherent extra cost on the emergency request. Things that aren't really emergencies tend not to be called that any more. It's kind of like that ER line: "Mark every lab request as STAT even if you don't need it right away".
e.g.,
[Jim1] <- [Mary1] <- [Bob1] <- [Jim2] <- [Jim3]
[Jim1] <- [Mary1] <- [Bob1] <- [Jim2] <- [Jim3] <- [Bob2]
|-------------swap-------------|
[Jim1] <- [Mary1] <- [Bob2] <- [Jim2] <- [Jim3] <- [Bob1]
The answer for your internal (and external) clients is almost always yes (sometimes no for technical reasons). However, that "yes" is always followed by a "but".
You - Yes Bob, we can work on that project for you, but we can't start until around X months from now.
Bob - That's not good enough.
You - OK, you have a couple of options. If you want to shift some of your other requests around we can push it to the start of your queue; that would get it started in Y months.
Bob - Still not good enough.
You - Well, you can negotiate with Jim and Mary to leapfrog their projects.
Bob - They won't budge.
You - Then go over their heads, and talk to Mr. Dithers.
[some time later]
You - Yes Mr. Dithers, we can work on Bob's request. You are aware that will push back Jim and Mary's requests.
Mr. Dithers - Yup.
You - Aye-Aye Capt'n.
[some time later]
Jim - Why the $%#$%#$5 aren't you working on my request?
You - Talk to Mr. Dithers.
And remember, if EVERYTHING is a top priority, NOTHING is top priority.
Edit:
Further to FogBUGZ (since you asked), it's more than just issue/bug tracking. You have features requests, inquiries and schedule items too (you might have to upgrade to the latest version to take advantage of those). Use it to its fullest.
Make sure that you are including your feature requests in FB. The more you dump into FB the better the picture it will give you. Remember to use the filters and the grid views.
FB's got a wiki. Post your group's high-level work list there for all to see.

Agile 40-hour week [closed]

Have you ever worked on a (full-time) project where using Agile methodologies actually allowed you to accomplish a 40-hour work-week? If so, what were the most valuable agile practices?
Yes, I'm on a 40-hour week (actually it's 37.5 hours or so, that's what my contract says) on a project that was run with Scrum from the beginning. That was about 2 years ago, and it was the first time we implemented Scrum. It's the project with the least amount of overtime for me personally, and it's also a PC game we're developing. I'm not even in "crunch" mode right now even though we're shipping an open beta on Friday.
We have learned a lot since then about SCRUM and agile. The single most valuable lesson from my point of view is: pod sizes must be reasonable ... we started out with pods with 12-20 members, that didn't work out well at all. A maximum of 10 should not be exceeded. It's too easy to agree on "flaky" and "vague" tasks because otherwise the standup & task planning meetings would take too long. So keep the pod size small and the tasks specific and get the product owner or sign-off's together with those who will work on the task.
Also, with a bi-weekly task planning schedule you have to get every Product Owner to agree on the task list and priorities for the current sprint, and new task requests should be issued before that planning meeting or else it will be ignored for the current sprint. This forced us to improve on inter-pod communication.
Scrum and management that is willing to buy into it.
Fair sprint planning. When you negotiate your own sprint you can choose what your team can accomplish rather than have tasks handed down from above. Having your sprint commitment locked in (management can't change it mid-sprint) gives freedom from the ever-changing whims of people.
A well maintained, prioritized backlog that is maintained cooperatively by the product owner and upper management is very useful. It forces them to sit down and think about the features they want, when they want them and the costs involved. They will often say they need a feature now, but when they realized they have to give up something else to get what they want their expectations become more realistic.
Time boxing. If you are running into major problems, start removing features from the sprint rather than working extra hours.
You need managerial support for your process; without it, agile is just a word.
Did I mention enlightened management?
Not being able to complete the tasks in a 40 hour week could be due to several things.
I see that this could happen in the early sprints of a Scrum project because the team wasn't sure of:
the amount of work they can do in the sprint and might bite off more than they can chew, and
their ability to estimate accurately the amount of points to award to blocks of work, or
the amount of effort required to perform "a point's worth" of work.
They may also be overly optimistic about what they can accomplish in the time allotted.
After that we get into several of the bad smells of Scrum, specifically:
a team isn't allowed to own its own workload, and maybe
management overrides decisions on what should be in a sprint
If any of these cut in then you are:
doing Scrum in name only, and
"up the creek without a paddle."
There's nothing much you can do apart from correct any problems in the first list, but this will only come with experience.
Correcting the two points in the second list will require a major rethink of how the company is strangling, not employing, Scrum best practises.
HTH
regards,
Rob
It may sound tough, but let's be realistic. Use of agile or any other flavour of software process has nothing to do with a 40-hour week. Normally the amount of weekly work hours is stipulated in the employment contract, and developers can use their discretion to put in any additional unpaid work.
Please let's not attribute magic healing powers to whatever your preferred software process is. It can provide a different approach to risk management, a different planning horizon or better stakeholder involvement; however, unless slavery is still lawful where you live, the working day starts when you come through the door and ends when you go home.
It is as much up to a developer to make sure that the contract of employment is not breached as to their management. Your stake is limited by the amount of pay you get and the amount of honest work hours you agreed to give in return, regardless of a methodology used.
Certainly.
For me the most important things that helped (in order of importance):
Cross-functional team - having programmers, testers, technical writers and sales/services people in the same team and talking to each other daily (daily call) was great.
Regular builds and continuous integration
Frequent reviews/demos to stakeholders and customers. This limits the risk, and the time lost to it, to the period of a single iteration (Sprint).
Daily Call or Stand up meeting
Adding to all of the above (inaccurate estimates, badly implemented Scrum, etc.), the problem could be a lack of understanding of your team's Velocity. It is something as simple as "how much work a team can accomplish", but it is not as easy to determine as it may seem.
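As a rough illustration of what that means in practice (a minimal sketch with made-up numbers, not taken from any of the answers here): track the points the team actually completed in recent sprints and plan the next sprint against their average.

    # Made-up numbers: points actually finished in the last four sprints.
    completed_points = [18, 22, 19, 21]

    velocity = sum(completed_points) / len(completed_points)
    print(f"Average velocity: {velocity:.1f} points/sprint")
    # Plan the next sprint against this number, not against what the team hopes to finish.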
I've worked at several shops that practiced various agile methodologies. The most interesting one had 4 "sessions" throughout the day that were about an hour and a half long, with a 20 minute break in between. Friday was Personal Dev day, so the last two sessions were for whatever you wanted to work on.
The key things for us were communication, really nailing down the concept of user stories, defining done to mean "in production", and trust. We also made sure to break the stories down into chunks that were no more than a day long, and ideally 1-2 development sessions. We typically swapped pairs every session to every other session.
Currently I run a 20+ person dev team which is partially distributed. The key tenet for me is sustainable pace - and that means I don't want my teams working > 40 hour weeks even occasionally. Obviously if someone wants to stay late and work on things, that's up to them, but in general I fight hard to make sure that we work within the velocity a 40-hour week gives us.
As both a Scrum Master and personnel manager, I have been a strong advocate of the 40-hour work week. I actively discourage team members from working over 40 hours, as productivity drops quickly when the work-life balance shifts. I have found that recuperating from a late-night work day often takes longer than the extra hours worked.
When it is well-run, Scrum aids in minimizing the "cram" that often occurs at the end of an iteration by encouraging (requiring?) a consistent pace throughout, and tools like velocity and burndowns work well to plan and track progress.

Sprint velocity calculations [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
Improve this question
Need some advice on working out the team velocity for a sprint.
Our team normally consists of about 4 developers and 2 testers. The scrum master insists that every team member should contribute equally to the velocity calculation, i.e. we should not distinguish between developers and testers when working out how much we can do in a sprint. This is correct according to Scrum, but here's the problem.
Despite suggestions to the contrary, testers never help with non-test tasks and developers never help with non-dev tasks, so we are not cross-functional team members at all. Also, despite various suggestions, testers normally spend the first few days of each sprint waiting for something to test.
The end result is that typically we take on far more dev work than we actually have capacity for in the sprint. For example, the developers might contribute 20 days to the velocity calculation and the testers 10 days. If you add up the tasks after sprint planning though, dev tasks add up to 25 days and test tasks add up to 5 days.
How do you guys deal with this sort of situation?
We struggle with this issue too.
Here is what we do. When we add up capacity and tasks, we add them up together and separately. That way we know that we have not exceeded the total time for each group. (I know that is not truly Scrum, but we have QA folks who don't program, so, to maximize our resources, they end up testing and we, the developers, end up developing.)
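A minimal sketch of that check, with made-up names and numbers (not from the answer): sum capacity and planned task time both in total and per group, and flag any group that is over its own capacity.

    # Made-up numbers for illustration: person-days available vs. planned per group.
    capacity = {"dev": 20, "qa": 10}
    planned  = {"dev": 25, "qa": 5}

    print(f"Total: {sum(planned.values())}/{sum(capacity.values())} days planned")
    for group in capacity:
        if planned[group] > capacity[group]:
            print(f"{group} is over-committed by {planned[group] - capacity[group]} days")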
The second thing we do is really focus on working in slices. We try to pick tasks first that can go to the QA folks fast. The trick is to focus on getting the smallest testable amount done and moved to the testers. If you try to get a whole "feature" done, then you are missing the point. While they wait for us they usually put together test plans.
It is still a work in progress for us, but that is how we try to do it.
Since Agile development is about transparency and accountability, it sounds like the testers should have assigned tasks that account for their velocity. Even if that means they have a task for surfing the web while waiting for testing (though I would think they would be better served developing test plans for the dev team's tasks). This will show the inefficiencies in your organization, which isn't popular, but that is what Agile is all about. The bad part is that your testers may be penalized for something that is an organizational issue.
The company I worked for had two separate (dev and qa) teams with two different iteration cycles. The qa cycle was offset by a week. That unfortunately led to complexity when it came to task acceptance, since a product wasn't really ready for release until the end of the qa team's iteration. That isn't a properly integrated team, but neither is yours from the sound of it. Unfortunately the qa team never really followed Scrum practices (no real planning, stand up, or retrospective) so I can't really tell if that is a good solution or not.
FogBugz uses EBS (Evidence Based Scheduling) to create a probability curve of when you will ship a given project based on existing performance data and estimates.
I guess you could do the same thing with this, just you would need to enter for the testers: "Browsing Internet waiting for developers (1 week)"
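As I understand it, the core of EBS is a Monte Carlo simulation over each estimator's history of estimate-vs-actual ratios. A rough sketch of the idea (not FogBugz's actual algorithm, and with invented numbers):

    import random

    # Past tasks: ratio of actual time to estimated time for this estimator.
    history_ratios = [0.9, 1.4, 1.1, 2.0, 1.0, 1.3]
    # Current estimates (hours) for the remaining tasks.
    remaining_estimates = [8, 5, 13, 3]

    totals = []
    for _ in range(10000):
        # Scale each remaining estimate by a randomly chosen historical ratio.
        totals.append(sum(est * random.choice(history_ratios) for est in remaining_estimates))

    totals.sort()
    for pct in (50, 80, 95):
        idx = int(len(totals) * pct / 100) - 1
        print(f"{pct}% chance of finishing within {totals[idx]:.0f} hours")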
This might be slightly off what you were asking, but here it goes:
I really don't like using velocity as a measure of how much work to do in the next sprint/iteration. To me velocity is more of a tool for projections.
The team lead/project manager/scrum master can look at the average velocity of the last few iterations and have a fairly good trend line to project the end of the project.
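For example (a sketch with invented numbers), the projection is just the remaining backlog divided by the average velocity of recent iterations:

    import math

    recent_velocities = [21, 18, 24, 20]   # points completed in recent iterations
    remaining_backlog = 130                # points still in the backlog

    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    iterations_left = math.ceil(remaining_backlog / avg_velocity)
    print(f"Roughly {iterations_left} iterations to go at ~{avg_velocity:.1f} points each")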
The team should be building iterations by commitment as a team. Keep picking stories until the iteration has a good amount of work that the team will commit to complete. It's your responsibility as a team to make sure you aren't slacking by picking too few and not overcommitting by picking too many. Different skill levels and specialties work themselves out as the team commits to the iteration.
Under this model, everything balances out. The team has a reasonable work load to accomplish and the project manager has a commitment for completion.
Make the testers pair-program as passive peers. If they have nothing to test, at least they can watch for bugs as the code is written. When they have something to test, in the second part of the week, they move to the functionality/"user story compliance" level of testing. This way, you have both groups productive, and the testers basically "comb" the code as it goes along.
Sounds to me like your system is working, just not as well as you'd like. Is this a paid project? If it is, you could make pay a meritocracy. Pay people based on how much of the work they get done. This would encourage cross-discipline work. Although, it might also encourage people to work on pieces that weren't theirs to begin with, or internal sabotage.
Obviously, you'd have to be on the lookout for people trying to game the system, but it might work. Surely testers wouldn't want to earn half of what devs do.
First an answer about velocity, then my personal insight about testers in a non-cross-functional Scrum team and the early days of every sprint.
I see an inconsistency there. If the team is not cross-functional, you already distinguish testers from developers, so you must also distinguish them in the velocity calculation. In a non-cross-functional team, testers don't really increase your velocity: your velocity will be at most what the developers can implement, but no more than what the testers can test (if everything must be tested).
Talk to your scrum master, otherwise there will always be problems with velocity and estimation.
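Put another way (my own restatement of that answer's point, with invented numbers): if everything must be tested, the effective velocity of a non-cross-functional team is capped by the smaller of the two capacities.

    dev_capacity_points  = 25   # what the developers can implement in a sprint
    test_capacity_points = 15   # what the testers can verify in a sprint

    effective_velocity = min(dev_capacity_points, test_capacity_points)
    print(f"Plan around {effective_velocity} points, not {dev_capacity_points}")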
Now as for testers and the early days of the sprint. I work as a tester in a non-cross-functional team with 5 devs, so this answer may be a bit personal.
You could solve this in two ways: a) change the work organization by adding a separate test sprint, or b) change the way the testers work.
In a) you create a separate testing sprint. It can run in parallel with the dev sprint (just shifted by those few days), or you can make it happen once every two or three dev sprints.
I have heard about such solutions but I have never worked this way.
In b) you must ask the testers to review their approach to testing activities. Maybe it depends on the practices and tools you use, or the process you follow, but how can they have nothing to do in these early days? As I mentioned before, I work as a tester with 5 developers in a non-cross-functional team. If I waited to start my work until a developer finished his task, I would never test all the features in a given sprint. Unless your testers perform only exploratory testing, they should have things to do before a feature is released to the test environment. There are some activities that can be done (or must be done) before the tester gets the feature/code into his hands. The following is what I do before features are released to the test environment:
- go through the requirements for the features to be implemented
- design test scripts (high level design)
- prepare draft test cases
- go through possible test data (if the change being implemented manipulates data in the system, you need to take a snapshot of that data to compare it later with what the feature does to it)
- wrap up everything in test suites
- communicate with the developer as the feature is being developed - this way you get a better understanding of the implemented solution (instead of asking when his mind is already on another feature)
- make any necessary changes to the test cases as the feature evolves
Then, when the feature is complete, you:
- flesh out the test cases with any details not known to you earlier (trivial things: a button name can change, or an additional step appears in a wizard)
- perform the tests
- raise issues
Actually I find myself spending more time on the first part (designing tests and preparing test scripts in the appropriate tool) than on actually performing those tests.
If they do all they can right away instead of waiting for code to be released to the test environment, it should help with this initial gap and minimize the risk of the testers not finishing their activities before the end of the sprint.
Of course there will always be less for testers to do in the beginning and more at the end, but you can try to minimize this difference. And if the above still leaves them lots of idle time at the beginning, you can give them tasks that involve no coding: some configuration, some maintenance, documentation updates, and so on.
The solution is never black and white, as each sprint may contain stories that require testing and others that don't. There is no problem in Agile with apportioning a tester, for example, for 50% of their time in one sprint and 20% in the next.
There is no sense in trying to apportion a tester 100% of their time to a sprint and trying to justify it. Time management is the key.
Testers and developers estimate story points together. The velocity of a sprint is always a combined effort. QA / testers cannot have their separate velocity calculations. That is fundamentally wrong.
If you have 3 devs and 2 testers and you include the testers' capacity and relate it to your output, then productivity will always look low. Testers take part in test case design, defect management and testing, which is not directly attributed to development. You can have effort tracked against each of these testing tasks but cannot assign velocity points.

Obtaining Management Buy-in on Process [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
Improve this question
The company I work for has historically had very little process as far as software development goes. Currently we don't really follow any specific method. The problem, of course, is that this makes it difficult to plan, have a decent release, or even attract good software developers.
I think I may be able to convince them to adopt some sort of Scrum process. The key, however, is getting management/owner buy-in. The idea of locking into specific features for any period of time, I think, scares them off.
Does anyone have any suggestions on how I can make my case?
So far I plan to:
Give a presentation on how Scrum works, how I see it working with the people we currently have, and how it will benefit the business.
Ask for training for specific people so we aren't "making it up as we go along".
Set a date to implement; there is some planning, and there are loose ends I probably have to tie up, to start the process fresh.
If your projects are like the standard / typical IT projects, then chances are your projects have failed, or been buggy, or cost too much, or didn't do what the customer (internal or external) needed, or took too long to develop.
If you are going to advocate a process, it needs to be shown that you will not lose flexibility just to have structure.
Points to make to decision makers:
Having a Scrum-like process will increase the amount of information that management has at its fingertips and allow them to make decisions more quickly. Consider the scenario where you have a 6 month project. With no process, how do you know how much work is done until it is released? With burndown charts, you can track how much time is left in a visible way. If you couple that with TDD, where you define, say, 100 test cases, they can see that 50% of the test cases are left to get working, but that the burndown rate leaves only enough time to do 25% (remember, managers like it simple, so this isn't a perfect picture of the project, but it is an easy-to-understand one that is better than what they had before; see the sketch after this list). E.g. they will feel more in control because the projects have better visibility.
Having a process allows you to improve quality, which in the long term results in fewer bugs, less time spent on bugs, and more knowledge transfer (what happens if your star developer is hit by a bus?). All this means that the company will get developers focused on building a better product rather than on continuously fixing bugs. E.g. this will save them money.
A small set of changes will be implemented first. This will be a proof of concept, and safe and easy to back out of if need be. E.g. this shows that you are mitigating perceived risk. And you need to mitigate perceived risk because that is what they'll be focusing on. That said, you will want to gather some data before you even make the proposal. Why? Good question: you need a baseline for 2 reasons:
You'll want to know how much the changes have helped. So you can propose more changes.
You'll likely have a manager complain about a problem while the proof of concept is going on. You'll want evidence that shows that problems in a chaotic, process-free environment are the norm, and that this is not a worsening of the situation but perhaps a slight improvement. You can bet on something going wrong in a process-free environment. And you can bet that the proof-of-concept process changes will be blamed. So be ready for it.
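Here is the sketch referred to in the first point above: a back-of-the-envelope burndown projection with invented numbers (100 test cases, a roughly 26-week project), showing the kind of simple picture managers can act on.

    total_cases = 100
    cases_passing = 50                      # half the test cases still left to get working
    weeks_elapsed, weeks_total = 20, 26

    rate_per_week = cases_passing / weeks_elapsed          # cases going green per week
    projected = cases_passing + rate_per_week * (weeks_total - weeks_elapsed)
    print(f"Projected at the deadline: {projected:.0f}/{total_cases} cases passing")
    # A chart of this trend answers "are we going to make it?" at a glance,
    # instead of waiting until the release date to find out.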
In my experience it's easier to sell management on a design methodology or practice after it's been piloted once. I would cherry-pick a small project, usually internally facing if possible, and ask to "pilot" your new scrum process. Generally it's a lot easier to get people to buy into a pilot because they only have to commit on a limited basis.
As your new scrumified pilot project moves along, be sure to document (post-its, notepads, Word doc, whatever) how scrum is making your project more or less successful than the previous (lack of) method. Be brutally honest here, and try to quantify things in real terms whenever possible.
After the project completes, compile your notes and present to management your findings using the completed project as evidence. Use findings such as:
"product backlog provided users with real sense of progress on featureset X"
"pigs/chickens meetings style saved X man/hours a week by keeping meetings in control"
"sprints allowed developers to work more closely together and resulted in X% less buggy code"
Generally, if you can bring leaders to a spot where they can draw dollars-and-cents conclusions, they will go for a new product or methodology. Also, and this is important as well, be prepared to walk away from your original process ideas if you find them not bearing out during the pilot.
Good luck and happy productivity!
You can sell Scrum as a "No Lose" proposition. Look at what happens when you use Scrum:
All development work is always focused on the highest priority tasks.
Progress is 100% open, and inspected daily.
Users/customers get to examine the progress at the end of every iteration.
Shifting requirements are handled automatically.
The only reasonable objection that I've ever seen to Scrum is that it isn't really possible to predict how much a project will cost, or how long it will take. This is because Scrum acknowledges that everyone will learn as the project proceeds, and the requirements will change. Waterfall pretends to be able to do this, but we all know how well that works.
Run the Joel Test to determine how much work you have to do. If you are having trouble estimating release dates, look into Evidence Based Scheduling.
Provide some sort of argument that shows how Scrum will address past pain points experienced by the key decision maker. Extra points if you can also provide evidence that demonstrates this.
Keep in mind that it is also possible that you don't have a process because management doesn't know and doesn't care about it. If your managers have no interest in or understanding of a process, such a process could also be started by getting all the programmers (or at least the team leaders) to agree to it and telling new employees, "this is how things are done." Of course, if you do this, it is necessary to pick a process that is compatible with your managers' requirements (e.g. if your managers ask for daily updates on milestones, don't pick a process that has no coding for the first two weeks).
This is really only appropriate if you have a discussion with a manager and their basic reaction is "It doesn't matter, as long as you keep writing code." If you present a process as a means to reorder the work that gets done rather than as one that adds new work, you're more likely to succeed with this approach.

Resources