How small is too small for a project plan? [closed] - project-management

I have friends who have asked me to make websites, and most of the sites are very small, so I usually don't bother with a technical plan. But one friend in particular clearly had goals larger than mine, and the project is dragging on forever. I feel that if I had written a spec before starting, this wouldn't have happened and our relationship would still be as solid as before.
So my question is: how can you tell when a project is too small to need a plan? How do you tell when the project you're embarking on is going to end up a guilt-ridden scope-creep nightmare?

If you are going to be charging money (or don't want to be stuck doing the project forever), a project plan is always a good idea, even if it's just a one-pager outlining what the web site will have (how many pages, any special features) and who is responsible for what. If you factor in that you'll spend 20% of your time (or whatever percentage past experience has taught you) on documentation and other non-coding work, you can give a better estimate of the effort needed. If it's a friend, you might want to tell them that you'll do the first X hours for free, but after that your rate is $Y per hour. Also, keep an accurate log of the time you've spent so that you can show them the amount of effort involved; an accurate log also helps you estimate future projects.
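To make that arithmetic concrete, here is a minimal sketch in Python; the 20% overhead, free hours, and rate are placeholder numbers, not recommendations:

    # Hypothetical numbers for illustration only.
    coding_hours = 40     # raw estimate for the build itself
    overhead = 0.20       # documentation / non-coding share from past experience
    free_hours = 10       # hours you donate before billing kicks in
    rate = 50             # dollars per hour after that

    total_hours = coding_hours * (1 + overhead)    # 48 hours
    billable = max(0, total_hours - free_hours)    # 38 hours
    print(f"Estimate: {total_hours:.0f}h, invoice: ${billable * rate:.0f}")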

As you may have already figured out, no project is too small to have at least an informal, written plan. Even if it's just a features list.

A project that does not need a plan is a project that does not need to be started at all. In my opinion everything needs a plan; what changes is the extent of that plan. A plan could be just a list of deliverables with a deadline attached to each one. A more robust plan would include time charts, costs, phases, a communication plan, dependencies, etc. So I think everything needs a plan; the contents of the plan are what change depending on the project's complexity.

Dwight Eisenhower on planning:
"In preparing for battle I have always found that plans are useless, but planning is indispensable."
It seems the same in many software projects: you'll find that your plans need continual updating and that your first plan was quite different from what you finally delivered. But that's okay; it's much better to put some planning in up front than to fly by the seat of your pants.
Agilists try to accommodate such changes in plans by breaking longer term plans into small "sprints" of 2-4 weeks. They'll have more details on the near term sprints, and fewer details on the longer term goals.
You'll especially want to be more detailed and precise if the project is bigger, if you are doing it for an external customer, or if you're attempting something new to you. It's less important (though not unimportant) for smaller projects and for types of work you've done before and are very familiar with.

Related

How do you assure software code quality? Is it worth it in an agile environment? [closed]

First of all, sorry for the codeless question, but I'd like to clarify one thing.
I have a senior developer on the team who actively pushes for code quality: merge-request reviews, no crappy code, and the like. But most of the other people on the team have a get-shit-done mentality. As the business guy, I don't check the code at all, but if I didn't have this one person who cares about quality, I think we would hit some heavy refactoring cycles at some point.
Of course there is also a downside to caring too much about quality: it takes time. And we may have to throw away a lot of beautiful code when we have to pivot as business needs change.
Two questions: a) How do you keep up the quality of your product? What practices do you use? b) Where is the line of caring about code quality just enough (neither too little nor too much)?
Code quality is important independent of whether you develop agile or not. You are absolutely right that quality improvement requires additional time. Most people fail because they spend that time in large blocks ('refactoring projects'), cleaning up code in more or less arbitrary places with the objective of reducing the number of quality issues as much as possible.
The process I advise is to follow the boy-scout rule of always leaving the code you change a bit cleaner (better) than it was before. That means whenever you change a function, procedure, method, or other code unit, fix the quality problems in it. The advantage is that you already understand the code (because you had to change it anyway) and it is going to be tested (because you need to test your original change). That means the additional effort for quality improvement (adding a comment, improving identifiers, removing redundancy, ...) is very low. In addition, you are only improving code that you are actually working with, and you don't waste time improving code that is never touched anyway.
Following the boy-scout rule ensures that the quality does not decrease, but instead steadily increases over time. It is also a reasonable level of 'caring'. I wrote some more about this here. In addition you need a good quality analysis tool like Teamscale that can reliably differentiate between legacy problems, new problems and problems in the code you recently changed.
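As a contrived illustration of the boy-scout rule in Python (both functions are invented for the example): suppose you had to touch a calculation function anyway to add rounding, and you tidy it up in the same change:

    # Before: the function you were asked to change (add rounding to 2 decimals).
    def calc(x, y, t):
        if t == 1:
            return x + y
        if t == 2:
            return x - y

    # After: the requested change, plus boy-scout cleanup done in passing:
    # descriptive names, a docstring, and an explicit error for unknown operations.
    def apply_operation(left, right, operation):
        """Apply a named arithmetic operation, rounded to 2 decimals."""
        if operation == "add":
            return round(left + right, 2)
        if operation == "subtract":
            return round(left - right, 2)
        raise ValueError(f"unknown operation: {operation}")

The cleanup costs a few minutes because the understanding and the testing were already paid for by the original change.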
Get and enforce a good unit testing framework and back it up with automated integration testing.
As for Agile, I found the morning 10-minute scrums useful, but the end-of-sprint meetings tended to run long.
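To make the unit-testing advice above concrete, here is a minimal sketch using pytest; the function under test and its rules are invented for the example:

    # test_pricing.py -- run with: pytest test_pricing.py
    import pytest

    def apply_discount(price, percent):
        """Hypothetical function under test (invented for this example)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_happy_path():
        assert apply_discount(100.0, 25) == 75.0

    def test_apply_discount_rejects_bad_input():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)

Enforcing this kind of check in continuous integration is what turns "good framework" into "backed up by automated testing".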
I've had good experience with SonarQube, a tool we use for static code analysis; it lets us keep track of code smells and similar issues in our code base. Another point: fixing the issues it finds can be planned into sprints. IDE integration is available as well.
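For reference, pointing the SonarQube scanner at a project typically needs only a small properties file; a minimal sketch (the key names are standard, the values are placeholders):

    # sonar-project.properties -- minimal scanner configuration (example values)
    sonar.projectKey=my-project
    sonar.projectName=My Project
    sonar.sources=src
    sonar.host.url=http://localhost:9000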

How to organize project descriptions and hit deadlines in a programming project [closed]

I know this may not be exactly a coding question, but I feel it is still related to programming, because I'm sure many developers have come across this before and might have some insight or advice.
My issue as a developer:
I work in a small company of roughly 15 people, 5 of whom are developers including myself; the rest are tech support and management. The problem I'm having is this: when we get an SOW (Statement of Work), our clients give us a rough description of the project they are requesting, usually a 1-3 page brief, often including a Visio document. As a programmer, I'm responsible for going over the document and providing a timeline for how long it should take me to complete the project.
Unfortunately, there have been times, and not just for me, where we under-estimated the project because we didn't fully get into it until we actually started developing it. That comes back to slap us in the face: my boss is upset because he is being hounded by the client, who is in turn upset because we missed our promised deadline.
My question is: how do you handle estimating from a basic project description when you have to give deadlines based on little more than a concept, and how do you organize that work?
I'm thinking of going to my boss and suggesting that instead of always pushing an estimated deadline to our clients, who then expect us to hit it, we write up a detailed, step-by-step document describing how to develop the application they want. It may take a lot more time, but at least if the project is moved to someone else it is laid out for them, and when I come back to it 4 months later I don't have to get back up to speed; I can just follow the steps I wrote.
What do you guys think? Ideas? Or better ways to handle this?
If you switch your development to using an iterative methodology (Agile, XP, Scrum, etc), then the customer will see results much earlier than any deadline you feel you have to promise - usually every 1 or 2 weeks.
The moment they see what you've developed, I can pretty much guarantee that they'll make changes to their initial requirements as they now have a visual representation of the product and it may not be quite what they were thinking of. Some of their changes might be quite radical, so best to get the feedback as early as possible.
In all the projects where I've insisted we do this, the customer was delighted: they saw results early, could influence the project outcome, and we hit their end deadline. Unexpectedly, a whole load of features got left behind and, guess what, the customer did not mind at all; they got the top features they wanted and put the product straight into production, and because they'd had lots of time to refine it to suit their business, they were already familiar with it.
It takes a lot of effort to get management, sales, creative, etc. to all buy in to an iterative style, so you may need to implement a hybrid solution in the meantime, but in my experience it is well worth it.
If a complete shift to iterative is not possible, split your project into tangible milestones and deliver on those milestones. As others have said, inflate your estimates. My previous manager doubled my estimates and the sales team doubled his too.
Inflate your project deadlines. It's something that most programmers should do (and I quote the VP of Freeverse, the company that I work at):
"It is a well-known fact among people who work in the software industry that the last 5% of development always takes the longest."
If possible, divide the higher-level tasks as far as you can so that you can get a better approximation of how many man-hours each sub-task will take.
Also, adding hidden buffers to your task estimates helps cover some of the unforeseen contingencies.
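A minimal sketch of that rollup in Python; the subtasks, hours, and buffer percentage are invented for illustration:

    # Break the high-level task into subtasks, estimate each in hours,
    # then add a hidden buffer for the contingencies you can't foresee.
    subtasks = {
        "login form": 6,
        "password reset": 4,
        "session handling": 8,
    }
    buffer = 0.30  # contingency share learned from past projects

    raw = sum(subtasks.values())     # 18 hours
    quoted = raw * (1 + buffer)      # 23.4 hours
    print(f"Raw: {raw}h, quoted with buffer: {quoted:.1f}h")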
If you mock up (in Balsamiq or whatever) with your customer, you will get more details. Armed with those details and some experience, your estimates will be more accurate. And then double it and add 4 (hours, days, weeks, months).
First, unless you systematically under-estimate, your boss should not get upset. It's his job to answer to the client, and he should know that, by definition, an estimate is NOT the future. Statistically, sometimes you will deliver earlier, sometimes later.
Personally, I think that the frame of "how long will it take" is not exactly the right discussion to have. Software development is a risky business, and change/surprises happen all the time. One approach which helps is to focus less on the "right" number, and more on the volatility. Look at the project, and consider the places where you are pretty clear on how long it will take (you have done it before and understand it well), and look at the places where you have uncertainty (unclear requirements, new technology), and for these, think about how bad it could go, and why. That will help you get not one number, but rather boundaries: what you think is reasonable, a worst-case scenario, maybe a best case scenario (which the client should never see :) ) - and convey that information to your boss, so that he can manage accordingly.
Additionally, this will allow you to identify the danger points of the project, and you can then prototype accordingly - look into the uncertainty points as early as possible, so that you can tighten up the timeline fast, and have early warnings for your boss and the client.
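One common way to express those boundaries rather than a single number is a three-point (PERT-style) estimate; a minimal sketch in Python, with invented figures:

    # Three-point estimate for one risky work package (numbers invented).
    optimistic, likely, pessimistic = 5, 10, 30   # days

    expected = (optimistic + 4 * likely + pessimistic) / 6   # PERT mean: 12.5
    std_dev = (pessimistic - optimistic) / 6                 # rough spread: ~4.2
    print(f"Expected: {expected:.1f}d, +/- {std_dev:.1f}d")

Handing your boss "12.5 days, give or take 4" carries far more useful information than "12 days".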

Has Crashing or Fast-Tracking a project schedule ever worked? [closed]

I posted this question on Reddit Programming and did not get a single response, so I am hoping the Stack Overflow community will have an opinion.
Have any of you ever been on a software project that had fallen behind where 'crashing' or 'fast-tracking' the schedule actually brought the project back on track? I have never seen either of these project management techniques actually work, and all the literature I have read (for example, The Mythical Man-Month) states that these two techniques do not work and actually push the project further behind. So who has seen it work?
I have only ever seen it work once. It was a three or four month long project that was projected to run an extra two months over the original delivery date. The project got fast-tracked and things ended up getting back on track for the release.
...keep in mind, though, that was only once. I've been on many more projects where the PM tried one of those two methods and it failed miserably, dragging the project out for months beyond the already extended date.
It can work. But there's a price to be paid: lower quality (more bugs, less testing) and turnover of burned-out programmers.
And in many cases, a fast-tracked project will fail to deliver on time and will still pay the full negative price, for the reasons stated in The Mythical Man-Month.
I've seen it work but it's not the norm.
Things I'd want to see before I thought it might be feasible:
1) Staff available with suitable skills and approach. By that I don't mean ".NET programmer"; I mean detailed technical skills, business-domain skills (so they understand the problem), personality fit, and familiarity with the tools and the approach (source control, methodology, and so on). This can happen in large companies where there are common tools, standards, and knowledge, but you need to be sure the new people tick pretty much all the boxes.
2) Tasks must be nicely divisible. The best situation is where there are whole modules, applications, or tasks unstarted, and you can put the new people on those. It minimises upskilling, additional communication, and so on. If you can't separate out what the new people will do, you're likely to seriously disrupt the existing team.
3) The whole team must have bought into the approach. If the existing team don't agree that bringing people on board will be right they'll likely fight it and you're doomed.
4) You need to be sure you've addressed why it was running late in the first place. If it was just bad estimates then are you confident the new estimates are good? If it was scope creep have you got the scope and change control in hand now? If it was because the deadline moved, are you sure it won't move again?
If you can't tick all four of those off, it isn't going to work.
Crashing and Fast-Tracking are two very different things...
Fast-tracking is where you take something (tasks or work packages) out of sequence and do it early. This may be because of hardware delivery lead times, availability of resources, risk, or whatever. So you might do things in parallel that you had originally planned to do sequentially. I've fast-tracked a lot of projects, and yes, it works.
Crashing a project is different, in that you typically throw more resources at a problem to get it done quicker. This can be tricky: if it's done as a crisis response, adding extra people can be painful when you are already under the pump. In some situations you just add more problems.
Another alternative to crashing is to reduce scope. This is not always possible, but it should be considered.
With fast-tracking or crashing, the sooner you know you need a schedule change, the easier it is to manage. This is why early deadlines are so important: they indicate how the rest of the project will go.
Both of these project management techniques can work well to maintain a schedule, but they should be used intelligently, by judiciously analyzing the network diagram:
study the variance;
study leads and lags;
decide which suits your project: 'crashing' or 'fast-tracking'.
There is a software management principle, Brooks's law, that says adding manpower to a late software project makes it later.
That said, as long as the measures taken are sensible, it should be OK. Don't expect too much of your staff, provide reasonable incentives, and don't take shortcuts. It won't make miracles happen, but if you're practical and just want to push things that little bit faster, it can definitely be done.
When people have a stake in the potential success of something it's amazing how much more effort they're willing to put in.
It depends on what you mean by "work". I don't think I've ever seen it make a way late project deliver on time, if that's what you are asking.
However, I have seen it make way late projects deliver only a bit late. From the fuzzy perspective of management, that might be called "working". I've also seen it significantly lower the customer-based pressure on the company. Some might also call that "working".
Of course the price is rather high. Employees burn out, develop health problems or big problems in their neglected personal lives, etc. All of that has large financial repercussions for the company, so I doubt the company comes out ahead in the long run. Is that "working"?

When do you blow the scope creep whistle? [closed]

Most people have been here at some point or another - in your project, you get really small requests along the way that you're happy to take care of, but at some point the little things add up. Sometimes it takes less time to implement something than it does to re-negotiate the project plan.
Provided the spec/requirements plan is decent and it isn't a doomed project to start with, at what point do you actually blow the whistle and start re-negotiating? At any request? When a request requires additional pages/forms? Or do you just feel it out? I'd love to hear how you make the call.
Budget N hours of ad-hoc requests in your project plan. (You know it's going to happen, so why isn't it in there?) Then track your ad-hoc requests and renegotiate when the budget's blown.
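A trivial sketch of that bookkeeping in Python (the budget figure and the requests are invented):

    # Track ad-hoc requests against the hours budgeted for them in the plan.
    adhoc_budget = 20.0  # hours set aside in the plan for small requests
    requests = []        # (description, hours) pairs logged as they come in

    def log_request(description, hours):
        requests.append((description, hours))
        spent = sum(h for _, h in requests)
        if spent > adhoc_budget:
            print(f"Budget blown ({spent:.1f}h of {adhoc_budget}h): renegotiate.")

    log_request("extra contact form", 3.5)
    log_request("logo swap on all pages", 2.0)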
At any request?
The real goal is to make the customer happy while not getting ripped off, right? Agile methods address these issues to a large extent. New requirements always come up, and if you don't address them as they come you end up building things that are obsolete or dysfunctional out of the box. So what you need is customer buy-in to the process, a working prototype as soon as possible, and lots of iterating. There's a ton more, of course, but that should be enough to be getting on with.
Edited to add: Customer buy-in means they are aware that you are working on a new feature instead of whatever else you would be doing, and that they are in agreement. When you've gone through your schedule and budget and still aren't done, they've been there with you the whole way and know why. No big surprise: "What? You're not DONE?!"
I'd say when it's going to impact the schedule/release date. If that happens, it's definitely time to blow the whistle. If either the scope creep is of sufficient magnitude, or if there are enough cumulative changes that it's impacting your ability to ship on time, then you should push back.
The moment your budget gets blown. You can't keep doing all these "freebie" add-ons - unless you are doing it for charity.
Once you've put your foot down once, you'll find the requests drying up!
I have only been in this situation with internal tools where our stated goal was to best serve any whim of our "customers" in a situation where there was no way to predict needs in advance. So take my answer with a grain of salt.
My view is that the decision is often political, and unless you're the head of the company it might not even be up to you. Unsatisfied customers going over your head to your boss can be more damaging than the extra work itself.
I'm a big believer in agile and continuous requirement gathering that does involve seeing how users work with the product, and trying to match their needs. However, every user has his individual "nice to haves" and there's no way to satisfy everyone. If you have multiple target users, democracy is a good system - only implement things that the majority of the users can benefit from.
If your clients are a cohesive group (e.g., you're making it for users in a specific department in a specific organization), run a Wiki site or something like SO or other engines where they can list and then collaboratively vote on possible features. Make it clear that you will give priority (but no guarantees) about higher rated features, and that you're probably not going to give priority to things that don't get votes from others.
In doing so, you may be able to get the clients to apply some collaborative filtering (or peer pressure) on ideas. You will also get some visibility, so people can see why their wishes were not respected. An important side benefit is that whoever requested a feature now has an interest in formulating the request and its rationale well, so that they can get others to vote for them. This will eliminate some asinine half-baked ideas.
Of course, an underlying assumption of all this is that you budgeted some time for "misc features" with whoever is paying for the project.
The estimated completion date is more of a probability curve than a single date.
Any extra feature reduces the likelihood of meeting some particular date.
You should 'blow the whistle' if and when the decrease in likelihood becomes 'significant' or worth mentioning.
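To see that probability curve concretely, here is a small Monte Carlo sketch in Python; the tasks and their triangular duration spreads are invented. Adding a feature means appending a task and watching the probability drop:

    import random

    # Each task: (optimistic, likely, pessimistic) duration in days (invented).
    tasks = [(2, 4, 9), (3, 5, 12), (1, 2, 6)]
    deadline = 14
    trials = 10_000

    hits = 0
    for _ in range(trials):
        total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        if total <= deadline:
            hits += 1
    print(f"P(done by day {deadline}) ~ {hits / trials:.0%}")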

How do you refine your estimation process? [closed]

Estimating how long any given task will take seems to be one of the hardest parts of software development. At my current shop we estimate tasks in hours at the start of an iteration, but once a task is complete we do not use the actuals to aid future estimations.
How do you use the information you gather from past estimations to refine future ones?
By far one of the most interesting approaches I've ever seen for scheduling realistically is Evidence Based Scheduling which is part of the FogCreek FogBugz 6.0 release. See Joel's blog post linked above for a synopsis and some examples.
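The core move in Evidence Based Scheduling is to divide new estimates by velocities (estimate divided by actual) sampled from your own history, rather than trusting the estimate directly; a stripped-down sketch in Python, with an invented track record:

    import random

    # Past velocity = estimated hours / actual hours, one entry per finished task.
    history = [1.0, 0.8, 0.5, 1.1, 0.7]   # invented track record
    new_estimate = 16                      # hours, your gut estimate
    trials = 10_000

    outcomes = sorted(new_estimate / random.choice(history) for _ in range(trials))
    median = outcomes[trials // 2]
    p90 = outcomes[int(trials * 0.9)]
    print(f"Median: {median:.0f}h, 90% confidence: {p90:.0f}h")

The real system simulates whole schedules this way, but even per-task it turns "16 hours" into a distribution shaped by how you have actually estimated in the past.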
If an estimate blew out, attempt to identify whether it was just random (the environment broke, a one-off tricky bug, etc.) or whether there was something you failed to identify.
If an estimate was way too large, identify what it was that you thought was going to take so long and work out why it didn't.
Doing that enough will hopefully help developers in their estimates.
For example, if a dev thinks that writing tests for a controller is going to take ages and then it ends up taking less time than he imagined, the next estimate you make for a controller of similar scope you can keep that in mind.
I estimate with my teammates, iterating until we reach consensus. Sure, we make mistakes, but we don't calculate a "velocity" factor explicitly; rather, we use gathered experience in our new estimation debates.
I've found that estimating time will only get you so far. Interruptions from other tasks, unforeseen circumstances, or project influences will inevitably change your time frames, and if you were to constantly re-assess, you would waste time managing that you could spend developing.
So here we give an initial estimate for the solution based on experience (we do not use a model; I've not found one that works well enough in our environment), but we do not judge our KPIs against it, nor do we assure the business that the deadline WILL be hit. Our development approach here is largely reactive, and it seems to meet the business's requirements of us very well.
When estimates are off, there is almost always a blatant cause, which leads to a lesson learned. Recent ones from memory:
The user interface assumed .NET functionality that did not exist (the ability to insert a new row and edit it inline in a GridView); lesson learned: verify the functionality of the chosen classes before committing to an estimate. This mistake cost a week.
The FTP process assumed that FtpWebRequest could talk to a bank's secure FTP server; it turned out there's a known bug with this class if the FTP server returns anything other than a backslash for the current directory. Lesson learned: google for 'bug' and 'problem' along with the class name, not just 'tutorial' and 'example', to make sure there are no gotchas lurking. This mistake cost three days.
These lessons go into a Project Estimation and Development "checklist" document, so they won't be forgotten on the next project.
