How to prioritize future features (enterprise web development) [closed] - project-management

Suppose you're the product manager for an internal enterprise web application that has 2000 users and 7 developers. You have a list of 350 future features, each ranging from 5 to 150 developer days of work.
How do you choose what features to work on, and how do you run the release process?
Here's what I'm thinking: (skip if boring)
Release Process. Work on several features at once, release each individually when it's ready. The other option (what we've been doing up to this point) is to pick out a certain set of features, designate them as "a release", and release them all at once (announcing via mass email).
The advantage of a shorter release process is that we can release features as soon as we finish development. The advantage of a bigger process is that it's easier to organize.
Feature Prioritization. Put all the future features in a spreadsheet with columns for feature, description, comments, estimate, benefit, (your) estimate, (your) benefit. Give copies to 2 senior engineers, the other senior project manager and yourself.
The engineers estimate all the features (how precisely? consulting each other?). To determine benefit, everyone allocates points (total = 10 * [number of future features]) among the future features (without consulting each other?), then we compare scores and average them (?).
Another potential strategy here is to just rank each feature on an absolute (say) 1-100 scale. Having an absolute ranking is nice because it makes prioritizing easier as our feature list changes (we don't want to have to redistribute points every time someone proposes a new feature).
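To make the scoring mechanics concrete, here is a minimal sketch: average each rater's points per feature and divide by the estimate, so a big feature needs proportionally more benefit to rank high. All feature names and numbers below are made up for illustration, not taken from the actual list:

    # Toy sketch of the point-allocation scoring described above (invented data).
    features = {
        # name: estimated developer-days (made-up numbers)
        "Photo Gallery": 20,
        "File Upload": 5,
        "Stock Ticker": 35,
    }

    # points allocated by each rater (pool = 10 * number of features = 30)
    ratings = {
        "engineer_1": {"Photo Gallery": 15, "File Upload": 10, "Stock Ticker": 5},
        "engineer_2": {"Photo Gallery": 12, "File Upload": 14, "Stock Ticker": 4},
        "pm":         {"Photo Gallery": 18, "File Upload": 6,  "Stock Ticker": 6},
    }

    def average_benefit(name):
        return sum(r[name] for r in ratings.values()) / len(ratings)

    # rank by benefit per developer-day (highest first)
    ranked = sorted(features, key=lambda f: average_benefit(f) / features[f], reverse=True)
    for f in ranked:
        print(f, round(average_benefit(f) / features[f], 2))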
What's your strategy? Do any books / websites attack the problem at this level of detail?

There's a great book that helps cover this topic called Agile Estimating and Planning by Mike Cohn. It has some great ways to estimate and plan releases, including a planning game called planning poker, where the engineering team gets together with cards to estimate user stories. Each engineer plays a card (1, 2, 3, 5, 8 or 13) face down. The engineers who played the highest and lowest cards explain their reasoning, and you do it again. After one or two repeats there is generally convergence on the same estimate.
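As a toy illustration of the mechanics only (this is not from the book): each round everyone plays a card, and the round repeats while the high and low cards disagree. The sample rounds below are invented.

    # Toy planning poker rounds: re-estimate while high and low cards disagree.
    DECK = [1, 2, 3, 5, 8, 13]

    def converged(estimates):
        return max(estimates) == min(estimates)

    rounds = [
        [3, 5, 13],   # the 13 and the 3 explain their reasoning
        [5, 5, 8],    # discussion pulls the outliers together
        [5, 5, 5],    # convergence: the story is estimated at 5
    ]
    for n, estimates in enumerate(rounds, start=1):
        assert all(card in DECK for card in estimates)
        print("round", n, estimates, "done" if converged(estimates) else "re-estimate")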
There's also Beyond Software Architecture: Creating and Sustaining Winning Solutions by Luke Hohmann, which might help with some of the product-management-related pieces and the reasoning to use for prioritization. I have not yet read the book, but I went to a talk by Luke Hohmann where he covered the subjects of his book and I can't wait to read it.
Also I would recommend reading books on various Agile Development processes such as Scrum, Crystal Clear, and XP. There's Agile Project Management with Scrum by Ken Schwaber and Crystal Clear: A Human-Powered Methodology for Small Teams by Alistair Cockburn. Also Extreme Programming Explained: Embrace Change (2nd Edition) by Kent Beck and Cynthia Andres.
As for feature prioritization, that is generally done by the stakeholders. You need to work on the features that address the needs of your stakeholders, which, as Luke Hohmann points out, includes the system architecture.
However, one of the most important things is to make sure that you have agreement on the software development process from the team. If you force a process that the team doesn't believe in, it will not work.

Surely you don't have 350 independent features; some must depend on others.
Put them all into some task management software which allows you to define which tasks depend on which other ones, and you might soon find that you've got a much easier decision process...
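To make the dependency point concrete, here is a minimal sketch of ordering features so that nothing is scheduled before the features it depends on. The feature names and dependencies are invented for illustration:

    # Order features so nothing comes before its dependencies (hypothetical names).
    from graphlib import TopologicalSorter  # Python 3.9+

    depends_on = {
        "Photo Gallery": {"File Upload"},        # gallery needs upload working first
        "Email Notifications": {"Registration"},
        "File Upload": set(),
        "Registration": set(),
    }

    build_order = list(TopologicalSorter(depends_on).static_order())
    print(build_order)  # e.g. ['File Upload', 'Registration', 'Photo Gallery', 'Email Notifications']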

As for the release process, you could introduce the features when they are ready and inform the users via a company blog that is updated whenever a new feature is done. Such a blog entry should then give a short overview about the feature, where to find it, how to use it, etc.
Not only does this keep your users curious and coming back, it also offers a great way for potential customers to check out the progress of your offering.
As for prioritizing future implementation: how about involving the customers there as well? Look at uservoice (it is used to track requests/bugs for this site). It offers a nice way of letting the users vote on most desired things as well as showing what is being worked on and what is planned.

"rank each feature on an absolute (say) 1-100 scale"
Build them in order.
Release them when you've got (a) significant value or (b) critical mass of small things.
Always work in priority order. Build the most important stuff first. Deliver as much value as quickly as possible.

A few people here have already said it - involve the end users in the decision process of what goes in and what waits. After all, it's not about what's useful to you, but what's useful to your end user.
That said, I wouldn't leave it open to 'all users to decide'; there should be a representative from the user group who you work with (i.e. a senior user role).
Even then, you aren't saying "what features do you want?" to the user; you ask them what functionality they would like to see arrive next. The reason why you put it to them that way, rather than letting them pick off a massive spreadsheet of individual features, is two-fold: 1) they don't know about dependencies, 2) you want to gather together a pack of features for a logical release.
So the user representative may say "we need to have the photo gallery working next". They might not be aware that the photo gallery is practically the same as the file upload module (it just accepts different file types).
So, in the next release version, you pack together the photo gallery and the file upload - why wouldn't you, considering that the file upload is like 75% done because of the work that went into the photo gallery module?
I don't believe you necessarily have to work on the hardest features first; it's what the users need sooner + what other features you gather together to make a 'logical pack'.
To a certain extent, you want to clear the feature log too. So, for example, you could have the following features and estimated times:
Registration Form - 3 hrs
Photo Gallery - 8 hrs (<- client has said they want this next)
File Upload - 2 hrs
Voting/Poll module - 7 hrs
Stock Ticker - 5 hrs
Out of these contrived features, I would take no. 2 (because the client is asking for it), then I would take no. 1 and no. 3: no. 3 because it's practically done when the gallery code has been done, and no. 1 purely because it's the smallest estimate out of the remaining features. Nothing will give you or your coding crew the feeling of progress on your project like seriously beating down the feature list (it will probably refill though).
As far as letting people know about a new release and what's in it, I would do it via email (rather than by blog or within the program itself), and I would make it as brief as possible - bullet points, something like this:
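A rough sketch of that selection logic, using the same contrived features and hour estimates. The "nearly free because of shared code" relationship is hand-coded here, since only the people who know the codebase know about it:

    # Pick the client's request, its nearly-free companion, then the smallest remainder.
    features = {
        "Registration Form": 3,
        "Photo Gallery": 8,
        "File Upload": 2,
        "Voting/Poll module": 7,
        "Stock Ticker": 5,
    }
    client_wants = "Photo Gallery"
    nearly_free_with = {"Photo Gallery": ["File Upload"]}   # shared code makes it ~75% done

    release = [client_wants] + nearly_free_with.get(client_wants, [])
    remaining = {f: h for f, h in features.items() if f not in release}
    release.append(min(remaining, key=remaining.get))       # smallest estimate left

    print(release)  # ['Photo Gallery', 'File Upload', 'Registration Form']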
===
Version 1.1 of Blue Widgets has just been launched and is available for your use now.
The following has been added:
Photo Gallery
File Upload
Registration Form
The user manual within the system contains more information on how these features work.
===
Bang - done. Make it as easy for people as possible.
LM

Related

Adding more structure to our development processes? [closed]

I work with a small team (4 developers) writing firmware and software for our custom hardware. I'm looking into better ways to organise the team and better define processes.
Our Current Setup
Developers are generally working on 2-3 projects at a time.
We have projects that work in an iterative sort of way, where a developer is in regular contact with the customer and features are slowly added and bugs fixed.
We also have projects with fixed delivery dates, and with long lead times, final hardware might appear only a few weeks before delivery. The fixed projects are usually small changes to an existing product or implementation and the work is somehow intermingled.
We are also moving from consulting to products, so we are occasionally adding features that we think will add value, at our own cost.
The Issues
We have a weekly meeting where proportions of time are allotted to each project. "Customer A wants to test feature X next week", so the required time is allotted. "Customer B is having issues with Y, could developer P drive down and take a look?", etc.
When we're busy, these plans are very loosely followed. Issues arise and lower-priority stuff gets pushed back. Sometimes priorities are not clear to developers, so there is friction when priorities appear to change. The next week there will be a realisation that we're getting behind on project Z and we all pull off some long days.
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Developers are generally working on 2-3 projects at a time.
Multitasking is incredibly inefficient. Switching the brain from one task to another requires time for the gears to change over.
When we're busy, these plans are very loosely followed.
Then why create plans at all?
Is it at all possible to dedicate just one developer to one task / product / customer? So developer P is the only one who talks to customer B? (Certainly the developer would need to document exactly what he's doing in case he gets hit by a bus, but he should be recording issues and roadmaps anyway.)
The next week there will be a realisation that we're getting behind on project Z and we all pull off some long days.
If there had been only one developer on project Z anyway, he wouldn't have been distracted by customer A's problems.
Don't think in terms of a pool of developers serving a pool of customers, think of one developer for a given customer. (This can make vacation planning a little tougher, but if you're constantly pulling all-nighters, you aren't spending enough time away from the office anyhow.)
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Aren't we all.
"Customer A wants to test feature X next week", so the required time is allotted.
Allotted by whom?
Do you create your own schedules? If not, the only response to management creating a schedule for you is all-nighters.
Realistic non-all-nighter schedules will bother management. Until you can prove that your customers want a better schedule with fewer all-nighters, there isn't much you can do.
The only way to reduce the all-nighters is to get stuff done sooner. But if the hardware doesn't arrive sooner, there isn't much you can do, is there?
Two thoughts: drive quality and improve estimates.
I work in a small software shop that produces a product. The most significant difference between us and other shops of a similar size I've worked in is full-time QA (now more than one person). The value this person should bring on day one is that no testing happens until the tests are written out. We use TestLink. There are several reasons for this approach:
Repeating tests to find regression bugs. You change something, what did it break?
Thinking through how to test functionality ahead of time - this is a cheek-by-jowl activity between developer and QA, and if it doesn't hurt, you're probably doing it wrong.
Having someone else test and validate your code is a Good Idea.
Put some structure around your estimation activity. Reuse a format, be it Excel, MS Project or something else (at least do it digitally). Do so, and you'll start to see patterns repeating in what you do when building software. Generally speaking, include time in your estimates for thinking about it (a.k.a. design), building, testing (QA), fixing and deployment. Also, read McConnell's book Software Estimation and use anything you think is worthwhile from it; it's a great book.
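One way to give that structure is to expand a raw "coding only" guess into the phases listed above. The sketch below is only illustrative; the phase shares are assumptions, not figures from any book or template:

    # Expand a raw build-only guess into a per-phase estimate (shares are assumed).
    PHASES = {"design": 0.20, "build": 0.40, "qa": 0.20, "fix": 0.10, "deploy": 0.10}

    def estimate(build_days):
        """Derive the full estimate from a build-only guess, assuming build is 40% of the whole."""
        total = build_days / PHASES["build"]
        return {phase: round(total * share, 1) for phase, share in PHASES.items()}

    print(estimate(8))   # {'design': 4.0, 'build': 8.0, 'qa': 4.0, 'fix': 2.0, 'deploy': 2.0}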
Poor quality means longer development cycles. The most effective step is QA; short of that, unit tests. If it were a web app I'd also suggest something like Selenium, but you're dealing with hardware, so I'm not sure what can be done. Improving estimates means being able to forecast when things will suck, which may not sound like much, but knowing ahead of time can be cathartic.
I suggest you follow the Scrum framework. Create a Scrum environment with an enterprise product. Have product teams working on the features for their own individual products, which are part of the combined enterprise product. If you have the resources, have a production/issues support and infrastructure Scrum team. If the issues are coming your way too quickly, have the infrastructure team try following Kanban or Scrumban.
The Scrum Framework in itself will solve most of your problems if adopted properly.

Sustaining early releases of a product [closed]

The context is as follows: Enterprise software developed without enough direct customer involvement. We did not develop this software for a particular customer but to fill a market gap. We worked the core requirements with major customers only; more customers are jumping on now. Mandated deadlines, changing requirements, little time to design. Fun time! :)
We got the first release out of the door. Then we got the second release out of the door (luckily in a more organized fashion).
Most problems that the sustaining engineering is facing for both releases are what they call 'design bugs' rather than good old code defects.
In general these 'design bugs' are such that a feature or part of a feature behaves as designed but that behavior is not what some customers want the product to do. It is not that all customers have these problems - each customer is different and what is enough for one is not for the other.
This makes me wonder about several things and I could really use an insight from y'all with more experience.
Here are some esoteric questions:
How common do you think this phenomenon is over a product's lifetime?
How much do you think did the context contribute to this?
What is/was your experience and context?
It is absolutely common that needs differ from client to client and that clients want to drive the product in different directions.
There are three options for any given change:
1) You don't do it - they've bought product software, they have to live with the product. I wish Word did some things differently, but I paid a couple of hundred pounds for it rather than having a bespoke word processor built from scratch, so I have to live with it.
2) You branch the product and have two different versions - as often as not this is the worst thing to do. As a software house your model is dependent on many clients contributing to a common code base. Having multiple versions significantly increases costs (every bug fixed twice, two manuals, etc. etc.) and breaks your business model. Again, if they want bespoke software built exactly to their requirements then they need to pay for that - you don't get bespoke software at package prices.
3) Customisation (potentially as an option / module / configurable setting) - this can work but you really need to think about whether it's the right thing to do for your product. Each extra option massively increases the number of ways in which the code can interact and the number of tests which have to be carried out, so there is a significant cost attached. In the enterprise space you will have to accept that clients will make demands in this area, but you need to accurately assess the consequences and costs (one-off during development and ongoing for support) and make sales and management aware of them.
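A back-of-the-envelope way to see why each extra option carries a real cost: with n independent on/off settings there are 2^n possible configurations that can interact, and the test surface grows with them. A quick illustrative calculation (nothing project-specific assumed):

    # Configuration combinations grow exponentially with independent on/off options.
    for n in range(1, 11):
        print(n, "options ->", 2 ** n, "configurations")
    # 5 options -> 32 configurations; 10 options -> 1024 configurations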
But essentially they all come down to the same thing - product software, even on the enterprise level, is far far cheaper than having an in-house team (or consultancy) build something bespoke. That price advantage comes with a downside - it's that you don't get exactly what you want and that the business needs to flex to the software sometimes.
It's not usually a popular message with clients or with sales but you need to work out which market you're in (product or bespoke) and remember that when making decisions.
In terms of the other two bits of the question - I don't believe the context created it at all. The root of it is that organisations are different. Unless you have all your clients the same, it was always going to be a problem at some point. Maybe it's a bit worse than it might have been but probably less than you think.
My experience in this area: I've been on both sides of the fence. I've been a development and / or project manager commissioning enterprise (and non-enterprise) third party software products (portals, finance systems and travel booking systems) and I've worked for two software houses developing them as a development manager (which is currently what I do).
Enterprise software developed without enough direct customer involvement
In such cases, this will be a common phenomenon over the product lifetime. If the customer is not involved and you don't know what and how he wants things, you'll be in for quite a bit of disappointment when you deliver the product and find out his reaction.
How much do you think did the context contribute to this?
I think it's the main cause.
What is/was your experience and context?
Similar context, up until about half-way across a project's development period, when delivering an intermediary product, I found out that many of the customer's expectations were quite different than what I had in mind. I guess it was a good idea to send something intermediary for approval, this way I had much less to modify than if I were to send a final product that would not meet customer expectations, so I suggest you keep a connection with the customer from time to time, and get him to approve features before you move on to new ones. This way, when the final product is ready, it'll be what the customer has seen and approved step by step all along.
It depends how long-lived the product is: the longer the life-time, the more evolution is possible and/or required.
For example, I helped to sustain one software product from 1991 to 2003, and at the end it was hardly anything like it was at the beginning:
It started as an assembly TSR for DOS, implementing modem-sharing for small (e.g. lawyers') PC-LANs.
It ended as a distributed service for NT, implementing least-cost fax routing for several telcos.
It was sold throughout this time, several releases per year; it was what customers wanted, but the customers (and their needs) changed over time, as did the underlying O/S, the competition, etc.
This is why you create APIs. I've also seen enterprise-level applications that allow users to create their own VB/Java scripts behind forms and inside reporting tools. Yes, embed a report writer and don't try to build every report yourself.
Enterprises are notorious for their desire to have massive amounts of features in every app. Even within the same company, you can get multiple ways of doing the same thing. In their defense, time is money, so when you save 1,000 users a click, it can add up. On the other hand, they also have people with too much time on their hands to think of every possible piece of data they may ever have to track in the entire lifetime of their company, and they will want it all in your app. They have the money and are set in their ways a lot longer than, say, a startup.
When you deliver something the customer did not want, you have failed with requirements engineering. Since this is the first stage of software development, and design, coding and testing are based upon it, bugs in the requirements are the most difficult and costly to fix.

What comes between the vision and a product backlog in scrum? [closed]

Not really looking for a book. I have seen lots of references and links to them; I can't buy one right now. I have been reading online, watching videos, etc. One thing I don't get so far: what comes between the vision (the solution to the problem) and the product backlog? From what I read, I think it is user stories, but I am not sure.
Is there anything online that shows all the steps in a linear fashion from vision/concept to the end?
Thank you for any direction.
EDIT: On the requirements gathering, just use Excel?
User stories and a lot of negotiation over what's essential and what's fluff.
A lot of negotiation.
Also a lot of back-and-forth on architecture. Scrum requires a stable, proven architecture. However, there are always upgrades and enhancements. How do those fit with the backlog? That's a lot of political jockeying between product owner, technology folks and (to an extent) users/buyers.
The process is inherently non-linear.
It's more like crystallization. You have a solution, you start to write stories, you have a technology vision, you have a team with certain skills and experience.
Any one of these can serve as a "nucleus" for deciding what goes into the backlog and in what order. Eventually, something becomes the nucleus and the mixture crystallizes. Sometimes the cost or schedule or risks are unacceptable, so you heat it back up, find another nucleus and see if it crystallizes acceptably around that new nucleus.
The recrystallization happens after each sprint, by the way, making it even less linear.
Edit. "stable proven architecture".
Question: Who pays for learning the new architecture?
Answer: Ha ha. There's no good answer. So be careful how much architectural learning you do while you have development sprints going on.
If you don't have an architecture in place that (a) works, and (b) can be articulated by almost everyone on the team, you're going to spend time assembling that architecture.
What does the time and cost of creating an architecture do to your first sprint?
You have to incorporate architecture development into the first sprint, delaying things.
Let's say you decide to implement a LAMP stack. You don't know whether to use PHP, Perl or Python. So you pick one. Like Python. And you promise the first sprint in four weeks. So you work for 3 weeks struggling with the kabillion add-on modules and frameworks. After 3 weeks, you think you have a working tech stack, but you don't have the promised sprint.
Do you delay? If so, everyone asks if you've got the pace right and starts doubling the time for all other sprints.
Do you deliver nothing? If so, what's the point of sprints if you have nothing at the end except infrastructure?
You can change, modify and enhance the infrastructure -- in manageable pieces. But to build a fresh architecture, prove out the pieces, train everyone and develop best practices takes time. A lot of it. And that time shouldn't -- really -- be charged as sprint time creating deliverable product. That's overhead time.
Edit. Tooling.
Rule 1. Agile processes don't use a lot of complex tools and processes. That's why I said that the process is a lot of "negotiation". Whatever makes you productive.
Rule 2. Don't over think it. Just do it.
Most folks say -- in the strongest possible way -- use 5"x8" paper cards and stick them to a wall. Seriously. No tools. Just simple paper, markers, tape and blank wall space.
Read this: http://www.agilemodeling.com/artifacts/userStory.htm
You can use a spreadsheet to collect user stories (and epics -- stories that have to be decomposed). You can add columns for complexity (story points), cost, priority and release, and use it for project management.
We use use cases (not user stories) but the tooling is the same. A use case is -- in a way -- a user story with more details up front. But the use case name can summarize how an actor interacts with a system; the interaction can usually be summarized with clear, simple nouns and a verb, which is very much like a user story.
Spreadsheets seem handy because you can rearrange the rows at the end of each sprint. You can do simple counts and sums to work out how much each feature will cost and when they'll arrive.
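A minimal stand-in for that spreadsheet arithmetic, with invented story names and numbers (this only illustrates the counting, it is not anyone's actual backlog):

    # Each "row" is a story; per-release point totals fall out of a few lines.
    backlog = [
        {"story": "User can reset password", "points": 3, "release": "R1"},
        {"story": "Admin can export reports", "points": 8, "release": "R1"},
        {"story": "User can tag documents",   "points": 5, "release": "R2"},
    ]

    per_release = {}
    for row in backlog:
        per_release[row["release"]] = per_release.get(row["release"], 0) + row["points"]

    print(per_release)   # {'R1': 11, 'R2': 5}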
I don't use a spreadsheet because -- in spite of the GUI glitziness -- I find it a little bit cumbersome. I would feel it necessary to write a spreadsheet extractor that would turn the backlog from an Open Office Org file into ReStructuredText (RST). I prefer RST -- plain text markup -- over spreadsheets.
This is all protracted negotiation. Everything changes as a result of every conversation. That's the point of an Agile method. Quick sprint followed by negotiation over the direction of the next sprint.
Our backlog is a big RST document. We publish all our documentation using Sphinx and it's very, very simple to write backlog, use cases, architecture, design, etc., in RST markup.
Our sprints are simply sections of a big document tree. They're decorated with a few special-purpose interpreted text fields for the subjective things like estimated completion date, and status (in process, released).
What comes between the vision (solution to the problem) and the product backlog.
Nothing. From the Vision, you just create the Product Backlog (PB). Note that the Product Backlog Items (PBIs) don't need to be all fine grained; only the most emergent items need to be. So, don't hesitate to create coarse grained items at the start; you'll decompose them into fine grained PBIs "just in time" (this activity is known as backlog grooming).
With these 2 artifacts, you can start your project. As Ken Schwaber puts it: "The minimum plan necessary to start a Scrum project consists of a vision and a Product Backlog. The vision describes why the project is being undertaken and what the desired end state is." (Schwaber 2004, p. 68)
From what I read, I think it is user stories but I am not sure.
To be honest, I'm not sure that I'm following you here. The PB is by definition a list of PBIs and creating the PB thus means feeding it with PBIs. Now, User Stories are just one possible formalism for the PBIs (Scrum doesn't force you to use User Stories, they are not appropriate for all projects) so, if you decide to use this formalism, creating the PB will mean creating User Stories.
Is there anything online that shows all the steps in a linear fashion from vision/concept to the end?
(The original answer included one of the oldest illustrations of the Scrum framework here.)
On the requirements gathering, just use Excel?
This would be my recommendation. If you need a sample, maybe have a look at Henrik Kniberg's Index Card Generator. More templates and/or samples at Scrum backlog templates and examples.
The backlog comes after the requirements are defined. The backlog is in a constant state of flux, but ultimately it is the work left to be completed.
Here is a chart: link
You can start by breaking the vision down into a series of epics. These can then live in your backlog as a prioritised list of the "big rocks" of work that need to get done.
As you start planning each sprint (or a little before), you can break down the epics into user stories and prioritise them.
Google 'user story mapping'. This is a great way to understand a problem from a functional/feature view, and it's the technique I recommend to people who want to build a product but don't know where to start. The input is the vision statement and the output is a prioritized product backlog, plus the model itself.

Under-priced projects (tight budget) - what are the characteristics? [closed]

I'm trying to determine some of the markers that indicate a project of limited resources.
In my experience a project becomes a 'limited resources' project because someone was desperate to sell a solution to a client. The result is a tight budget, features are culled and SDLC processes are cut to a minimum. These short-cuts are taken so the company has some chance of making a profit or even breaking even.
This is a list of things which I have seen go hand-in-hand with a project of limited resources:
Bare minimum amount of time allotted to QA
Strict bureaucratic process for off-spec work
Change request budget may be small or non-existent
Formalised processes get dropped in favour of using time for development
No time available for value-add QA like content checking (e.g. grammar or spelling errors in text).
Can’t do any content management or data entry for client
Have to go for ‘good enough’ coding solutions
No time allowance for hallway usability testing.
No budget for writing user documentation or manuals.
Generally no time for technology research before coding
No time to produce a risk analysis document
A production check-list may be used instead of a project schedule.
There is no time for a programmer to fill in their 'actual' times vs. estimated times in the project schedule.
Progress updates given to clients may be less frequent or very basic
Less time is available to spend on understanding the clients business domain
Programmers may have to work unpaid overtime.
No time allotted for a project post-mortem
What other sure signs are there for a limited resources project?
===
EDIT
I will try to clear up some of the confusion with an example. This is what I mean: the client is given a proposal/quote saying their project will cost $20k. The client then comes back and says "sorry, my budget is $16k maximum". The boss says "make the proposal $16k - we want this work".
So, effectively, you have to do a project with less budget than it should have. There are boundaries where it becomes ridiculous - if the client were to say "my budget is $4k", then you couldn't possibly do it.
And yes, sometimes a tight budget can become so silly that it was a bad business decision to accept the project in the first place (i.e. a doomed project).
I understand that there is no such thing as a project with an unlimited budget. Often business people make the decision whether a project should be undertaken (a business person often isn't a project manager).
What you are talking about is not a 'limited resources' project, but instead a rushed and unplanned project.
A few items in your list I take issue with:
Strict bureaucratic process for off-spec work
Change request budget may be small or non-existent
Actually, these should be the norm for most projects. Who's requesting and paying for the changes, you or the client?
Can’t do any content management or data entry for client
No budget for writing user documentation or manuals.
If that's not part of the contract, why are you doing it?
Have to go for ‘good enough’ coding solutions
At some point, you have to stop at 'good enough', or else you are going to be polishing from now until the end of time.
Some things I would add to your list:
Office supplies become scarce or go under lock and key.
Corporate supplied food/beverages disappear
Down time disappears. 100% of your time on your time sheet must be dedicated to project work.
The printer/photocopier is running full-bore printing other staff members' resumes.
The Boss' door is shut for 90% of the day.
Quite frankly, I've never heard of a project that had unlimited resources. Even the government will put the brakes on something after a while.
So, from a pure logic perspective, all projects have limited resources.
Limited resources? All projects that I know of have limited resources:
time
developers
budget
Outdated or obviously beta documentation which doesn't jibe with whatever release of the product is on site, or documentation which looks like it's been through several generations of Xerox copies.
No onsite installation or support. Depending on the size of the system being implemented, a company in good shape might send one or more of their developers to oversee the implementation, whereas a company that's tight might offer phone or email support only in a more "fire and forget" approach.
Continuous occurrences of new, forgotten features, assumed to be "obvious" and "implicit" by the user, never stated in the requirements, leading to Bug vs. Change Request discussions.
Waterfall model adopted instead of an iterative approach.
Customer protesting the costs of fixing, saying something like: "If you have bugs, it is because you didn't do your job right", not accepting the triple constraint's effects on quality when cutting time/budget.
Pressure for lower prices for maintenance and support activities after project deployment in production.
Pressure for adopting a fixed-price project (outsourcing) to transfer financial risk together with timeline risk.
"Under-priced projects"
If I understand correctly what you mean, you really are talking about projects where the resources available to the project are not appropriate to achieve the results that were promised to the client.
I can think of four ways for this situation to arise:
Wrong estimates when preparing the project plan
Requirements creep
Reducing the project budget without reducing the project scope
Inadequate resources (staff skills, computer resources, etc.) for the project scope
When people in the project become aware of the situation, they really have two options: cut costs or cut scope. Cutting the scope can be a hard sell and may endanger the project's viability, so most of the time people opt for cutting costs, especially since costs can be cut in many ways without attracting attention from the higher echelons:
Unpaid overtime
Reducing quality
Eliminating documentation
and so forth.
In fact, you may even look good as a project manager when you start cutting costs, since cost containment is one of the project manager's responsibilities! I assume that what you want is to find ways to diagnose an under-funded project. I think that instead of developing an extensive list of symptoms, I would strive to identify a general condition.
In my opinion, there is a general condition that allows you to pinpoint an under-funded project. For most projects, staff is the biggest cost - or at least the second biggest cost that can be managed by the project manager. Whenever you find an experienced manager taking measures to reduce staff costs, and those measures were not part of the original plan, then you can be sure you have an under-funded project.
Regards,
Using the information you guys so kindly contributed, I was able to pull it all together and write an article.
I'll put a link here in case someone in the future is looking for help on the topic:
Surviving An Under-resourced Project
--LM

Managing user stories for a large project [closed]

We are just starting on a pretty big project with lots of sub-projects. We don't currently use any kind of named process, but I am hoping to get some kind of agile/Scrum-like process in by the back door.
The area I will be focusing on most is having a good backlog for the whole project and, at least in my head, the idea of an iteration where some things are taken from the backlog, looked at in more detail and developed to a reasonable deadline.
I wonder what techniques people use to break projects down into things to go in the backlog, and, once the backlog is created, how it is maintained and ordered. Also, how are relationships between elements maintained (i.e. this must be done before it is possible to do that, or this was one story and now it is five)?
I am not sure what I expect the answer for this question to look like. I think what may be most helpful is if there is an open source project that keeps its backlog online in some way so I can see how others do it.
Something else that would get +1 from me is examples of real user stories from real projects (the "a user can log on" story does not help me picture things in my project).
Thanks.
I would counsel you to think carefully before adopting a tool, especially since it sounds like your process is likely to be fluid at first as you find your feet. My feeling is that a tool may be more likely to constrain you than enable you at this stage, and you will find it no substitute for a good card-wall in physical space. I would suggest you instead concentrate your efforts on the task at hand, and grab a tool when you feel like you really need one. By that stage you'll more likely have a clear idea of your requirements.
I have run several agile projects now and we have never needed a more complex tool than a spreadsheet, and that on a project with a budget of over a million pounds. Mostly we find that a whiteboard and index cards (one per user story) is more than sufficient.
When identifying your stories, make sure you always express them in terms that make sense to your users - some (perhaps only small) piece of surfaced functionality. Never allow yourself to slip into writing stories about technical details that you could not demonstrate to a user.
The skill when scheduling the stories is to try to prioritise the things you know least about first (plan for what you want to learn, rather than what you want to do) whilst also starting with the stories that will allow you to develop the core features of your application, using subsequent stories to wrap functionality (and technical complexity) around them.
If you're confident that you can leave some piece of the puzzle till later, don't sweat on getting into the details of that - just write a single story card that represents the big conversation you'll need to have later, and get on with the more important stuff. If you need to have a feel for the size of what's to come, look at a wideband delphi estimation technique called planning poker.
The Mike Cohn books, particularly Agile Estimating and Planning will help you a lot at this stage, and give you some useful techniques to work with.
Good luck!
Like DanielHonig we also use RallyDev (on a small scale) and it sounds like it could be a useful system for you to at least investigate.
Also, a great book on the user story method of development is User Stories Applied by Mike Cohn. I'd certainly recommend reading it if you haven't already. It should answer a lot of your questions.
I'm not sure if this is what you're looking for, but it may still be helpful. Max Pool from codesqueeze has a video explaining his "agile wall". It's cool to see his process, even if it may not necessarily relate to your question:
My Agile Wall (Plus A Few Tricks)
So here are a few tips:
We use RallyDev.
We created a view of packages that our requirements live in.
Large stories are labeled as epics and placed into the release backlog of the release they are intended for. Child stories are added to the epics. We have found it best to keep the stories very granular. Coarse grained stories make it difficult to realistically estimate and execute the story.
So in general:
Organize by the release
Keep iterations between 2-4 weeks
Product owners and project managers add stories to the release backlog
The dev team estimates the stories based on T-shirt sizes, points, etc. (a sketch of one such mapping follows below)
In sprint planning meetings the dev team selects the work for the iteration from the release backlog.
This is what we've been doing for the past 4 months and have found it to work well. Very important to keep the size of the stories small and granular.
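If the team estimates in T-shirt sizes but plans releases in points, a fixed mapping keeps the two consistent. A toy sketch follows; the size-to-point values and story names are arbitrary assumptions, not part of the process described above:

    # Toy mapping from T-shirt sizes to story points so size estimates can be summed.
    SIZE_TO_POINTS = {"XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8}

    iteration = [("Login story", "S"), ("Reporting story", "L"), ("Search story", "M")]
    total = sum(SIZE_TO_POINTS[size] for _, size in iteration)
    print(total)   # 10 points planned for the iteration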
Remember the INVEST and SMART acronyms for evaluating user stories; a good story should be:
I - Independent
N - Negotiable
V - Valuable
E - Estimable
S - Small
T - Testable
SMART:
S - Specific
M - Measurable
A - Achievable
R - Relevant
T - Time-boxed
I'd start off by saying Keep It Simple: use a shared spreadsheet with tracking (and backup). If you see scaling or synchronization problems, such that maintaining the backlog in a consistent state is getting more and more time-consuming, trade up. This will automatically validate and justify the expenditure/retraining costs.
I've read some good things about Mingle from Thoughtworks.
Here is my response to a similar question that may give you some ideas:
Help a BA! Managing User Stories ...
A lot of these responses have been with suggestions about tools to use. However, the reality is that your process will be the much more important than the tools you use to implement the process. Stay away from tools that attempt to cram a methodology down your throat. But also, be wary of simply implementing an old non-agile process using a new tool. Here are some strong facts to consider when determining tools for processes:
A bad process instrumented with a software tool will result in a bad software tool implementation.
Processes will change based on the group you are managing. The important thing is the people, not the process. Implement something they can work successfully in, and your project will be successful.
All that said, here are a few guidelines to help you:
Start with a pure implementation of a documented process.
Make your iterations small.
After each iteration, talk with your teams and ask what they would change; implement the changes that make sense.
For larger organizations, if you are using Scrum, use a cascading stand-up mechanism. Scrum Masters meet with their teams. Then the Scrum Masters meet in stand-ups of 6-9, with a Super-Scrum-Master responsible for reporting the items from the Scrum Masters' scrum to the next level... and so forth.
You may find that having weekly super-scrum meetings will suffice at the highest level of your hierarchy.
