What does the perfect status report look like? [closed] - project-management

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I work with a lot of offsite developers and contractors. I ask them daily to send me a quick 5 minute status of their work for the day. I have to sometimes consolidate the status of individuals into teams and sometimes consolidate the status of a week, for end-of-period reporting to my clients.
I want to learn:
Items accomplished and how much time was spent on each
Problems encountered and how much time was spent on each
Items that will be worked on next, their estimates (in man hours) and their target dates
Questions they have on the work
I'm looking for a format that will provide this information while:
Being quick for the developers to complete (5-10 minutes, without thinking too much)
Easy for me to read and browse quickly
Is uniform for each developer
What would you suggest?

you probably do not want to hear this, but here it is anyway -
i have been in this situation on both sides of the desk, and come to the conclusion that these kinds of rolled-up status reports are a complete waste of time for you and the developers. Here's why:
the developers should be working on features/deliverables with specified deadlines
the developers should be asking questions when they occur
communication should flow in both directions as needed
if these things are not happening, no amount of passive status reporting is going to fix the problems that will inevitably arise
on the developer side of the fence - a "quick five minute status" [i hate that phrase, five minutes is not quick!] interrupts the developer's flow, causing a loss of fifteen minutes (or more) of productivity (joel even blogged about this i think). But even if it really is only five minutes, if you have a dozen developers then you're wasting five man-hours per week on administrivia (and it's probably more like 20)
on the manager side of the fence - rolling up the status reports of individuals into teams by project etc. is non-productive busywork that wastes your time also. Chances are that no one even reads the reports.
but here's the real problem: this kind of reporting and roll-up may indicate reactive management instead of pro-active management. In other words, it doesn't matter what methodology is being used - scrum, xp, agile, rational, waterfall, home-grown, or whatever - if the project is properly planned and executed then you should already know what everyone is doing because it was planned in advance. And it doesn't matter if it was planned that morning or six months ago.
ignoring client requirements for a moment, if you really need this information on a daily basis to manage the projects then there are probably some serious problems with the projects - asking the developer every day what they're going to work on next and how long it will take, for example, hints that no real planning was done in advance...
as for the client requirements, if they absolutely insist on this kind of minutia [and i know that, for example, some government agencies do] then the best option is to provide a web interface or other application to automate the tedium that will do the roll-up for you. You'll still be wasting the developers' time, but at least you won't be wasting your time ;-)
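On the automation point: the roll-up itself is trivial to script. Here is a minimal sketch in Python - the report fields, team names, and output format are all assumptions for illustration, not any real tool's schema:

```python
from collections import defaultdict

# One entry per developer per day; every field name here is an assumption,
# not the format of any real reporting tool.
reports = [
    {"team": "Alpha", "dev": "A", "done": "feature X", "hours": 6, "next": "unit tests"},
    {"team": "Alpha", "dev": "B", "done": "bug fixes", "hours": 7, "next": "feature Y"},
    {"team": "Beta",  "dev": "C", "done": "feature Z", "hours": 8, "next": "feature Z"},
]

def roll_up(reports):
    """Consolidate individual daily statuses into one summary per team."""
    teams = defaultdict(list)
    for r in reports:
        teams[r["team"]].append(r)
    lines = []
    for team, entries in sorted(teams.items()):
        total = sum(e["hours"] for e in entries)
        lines.append(f"{team}: {total}h total")
        for e in entries:
            lines.append(f"  {e['dev']}: {e['done']} ({e['hours']}h) -> next: {e['next']}")
    return "\n".join(lines)

print(roll_up(reports))
```

Once the developers submit structured data instead of free text, the weekly and per-team consolidation falls out for free, which is the whole point of automating the tedium.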
oh, and to answer your question literally: the perfect status report says "on target with the project plan", and nothing more ;-)

Use Scrum. Create the sprint backlog, keep a spreadsheet with the tasks and a column for each day of the sprint, and ask people to fill in the hours worked on each task every day. Send a daily report starting with the burndown chart for the sprint, followed by two short one-liners for each member: last worked on and next working on. Send a weekly report with the burndown chart, a red/yellow/green status for each major feature (with blocking issues and notes if it's not green), and the remaining items on the sprint backlog.
I don't have a link to samples, but here are some drafts:
10/02/2008 - Product A daily status
<Burndown chart>
Team member A
Last 24: feature A
Next 24: feature A unit tests
Team member B
Last 24: bug jail
Next 24: feature B
Team member C
Last 24: feature C
Next 24: feature C
Blocked on: Dependency D - still waiting on the redist from team D
10/02/2008 - Product A weekly status
<Burndown chart>
**Feature A** - Green
[note: red/yellow/green represents status; use background color as well for better visualisation]
On track
**Feature B** - Yellow
Slipping a day due to bug jail
Mitigation: will load balance unit tests on team member A
**Feature C** - Red
Feature is blocked on external dependency from team D. No ETA on unblock.
Mitigation: consider cutting the feature for this sprint
**Milestone schedule:**
Planning complete - 9/15 (two weeks of planning)
Code complete - 10/15 (four weeks of coding)
RC - 10/30 (two weeks stabilization and testing)
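The burndown chart in these drafts can be computed straight from the task spreadsheet. A rough sketch of that calculation, assuming one remaining-hours figure per task per sprint day (the task names and numbers are made up):

```python
# Remaining hours per task, one column per sprint day (invented numbers).
tasks = {
    "feature A": [16, 12, 8, 4, 0],
    "feature B": [12, 12, 10, 6, 3],
    "feature C": [8, 8, 8, 8, 8],   # blocked on team D: no progress
}

def burndown(tasks):
    """Total remaining hours per sprint day - the burndown line itself."""
    days = len(next(iter(tasks.values())))
    return [sum(hours[d] for hours in tasks.values()) for d in range(days)]

for day, remaining in enumerate(burndown(tasks), start=1):
    print(f"day {day}: {remaining}h remaining")
```

A flat stretch in the output (like feature C's row here) is exactly the kind of blocked work the red/yellow/green notes should call out.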

Just give them a template laid out in the format that you expect to see the data returned in. You may also consider increasing the time they are going to devote to this and removing the "not thinking too much" clause if you are requiring estimates for future work. I wouldn't trust an estimate that someone came up with in 5 minutes without thinking.
If you are currently using any project management software, it should be trivial for the developers to record and review (or even just remember) what they have done and compile it for you. Ideally they would be recording issues or questions throughout the day, not trying to come up with them just to fill in the report.
It seems like your "I want to learn" list is an excellent starting point to generate a template from. Only you will know what the perfect format for you is.
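To make the uniformity concrete, the template itself can be generated so every developer gets the identical skeleton each day. A sketch (the section names are lifted from the "I want to learn" list in the question; the header format is an invention):

```python
# Sections mirror the question's "I want to learn" list.
SECTIONS = [
    "Accomplished (item: hours spent)",
    "Problems (item: hours spent)",
    "Next up (item: estimate, target date)",
    "Questions",
]

def blank_template(name, date):
    """Emit a uniform fill-in-the-blanks daily status template."""
    lines = [f"Status: {name} - {date}"]
    for section in SECTIONS:
        lines.append(f"## {section}")
        lines.append("- ")
    return "\n".join(lines)

print(blank_template("Developer A", "2008-10-02"))
```

Because every report shares the same headings, skimming and later consolidation both get easier.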

Generally I have just relied on e-mail as a means of providing status reports; it provides simplicity and speed of completion, but does not enforce any sort of uniformity.
There are a number of options to achieve this but they all risk making the process more complex and time consuming. Some of these could be:
An online form with a section for each item, or a multi-sheet spreadsheet with each sheet being a section.
All of these require some effort on your part to create. Do you need the uniformity for some purpose, e.g. to automate the summary reports?
An alternative to this would be to use some project management tool which the contractors filled in whilst they were working and that you could report on at any time. I would recommend Thoughtworks Studio Mingle, but it does rely on an agile-like process.

Looks like you want to do Extreme Programming stand up meetings.
http://www.extremeprogramming.org/rules/standupmeeting.html
You can talk to off-site team members over a speakerphone or some VoIP solution.

Related

Adding more structure to our development processes? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I work with a small team (4 developers) writing firmware and software for our custom hardware. I'm looking into better ways to organise the team and better define processes.
Our Current Setup
Developers are generally working on 2-3 projects at a time.
We have projects that work in an iterative sort of way, where a developer is in regular contact with the customer and features are slowly added and bugs fixed.
We also have projects with fixed delivery dates and long lead times; final hardware might appear only a few weeks before delivery. The fixed projects are usually small changes to an existing product or implementation, and the work is somewhat intermingled.
We are also moving from consulting to products, so we are occasionally adding features that we think will add value, at our own cost.
The Issues
We have a weekly meeting where proportions of time are allotted to each project. "Customer A wants to test feature X next week", so the required time is allotted. "Customer B is having issues with Y, could developer P drive down and take a look?", etc.
When we're busy, these plans are very loosely followed. Issues arise and lower priority stuff gets pushed back. Sometimes, priorities are not clear to developers so there is friction when priorities appear to change. The next week there will be a realisation that we're getting behind on project Z and we all pull-off some long days.
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Developers are generally working on 2-3 projects at a time.
Multitasking is incredibly inefficient. Switching the brain from one task to another requires time for the gears to change over.
When we're busy, these plans are very loosely followed.
Then why create plans at all?
Is it at all possible to dedicate just one developer to one task / product / customer? So developer P is the only one who talks to customer B? (Certainly the developer would need to document exactly what he's doing in case he gets hit by a bus, but he should be recording issues and roadmaps anyway.)
The next week there will be a realisation that we're getting behind on project Z and we all pull-off some long days.
If there had been only one developer on project Z anyway, he wouldn't have been distracted by customer A's problems.
Don't think in terms of a pool of developers serving a pool of customers, think of one developer for a given customer. (This can make vacation planning a little tougher, but if you're constantly pulling all-nighters, you aren't spending enough time away from the office anyhow.)
I'm told that this is all quite common for a small start-up in our industry, but I'm just looking for ways to limit the number of "pizza in the office" all-nighters.
Aren't we all.
"Customer A wants to test feature X next week", so the required time is allotted.
Allotted by whom?
Do you create your own schedules? If not, the only response to management creating a schedule for you is all-nighters.
Realistic non-all-nighter schedules will bother management. Until you can prove that your customers want a better schedule with fewer all-nighters, there isn't much you can do.
The only way to reduce the all-nighters is to get stuff done sooner. But if the hardware doesn't arrive sooner, there isn't much you can do, is there?
Two thoughts: drive quality and improve estimates.
I work in a small software shop that produces a product. The most significant difference between us and other shops of a similar size I've worked in is full-time QA (now more than one person). The value this person should bring on day one is not testing until the tests are written out. We use TestLink. There are several reasons for this approach:
Repeating tests to find regression bugs. You change something, what did it break?
Thinking through how to test functionality ahead of time - this is a cheek-by-jowl activity between developer and QA, and if it doesn't hurt, you're probably doing it wrong.
Having someone else test and validate your code is a Good Idea.
Put some structure around your estimation activity. Reuse a format, be it Excel, MS Project or something else (at least do it digitally). Do so, and you'll start to see patterns repeating in how you build software. Generally speaking, include time in your estimates for thinking about it (a.k.a. design), building, testing (QA), fixing and deployment. Also, read McConnell's book Software Estimation and use anything you think is worthwhile from it; it's a great book.
Poor quality means longer development cycles. The most effective step is QA; short of that, unit tests. If it were a web app I'd also suggest something like Selenium, but you're dealing with hardware, so I'm not sure what can be done. Improving estimates means being able to forecast when things will suck, which may not sound like much, but knowing ahead of time can be cathartic.
I suggest you follow the Scrum Framework. Create a Scrum Environment with an enterprise product. Have product Teams working on the features for their own individual products, which is a part of the combined enterprise product. If you have the resources have a production/issues support and infrastructure Scrum Team. If the issues are coming your way too quickly, have the infrastructure Team try following Kanban or Scrumban.
The Scrum Framework in itself will solve most of your problems if adopted properly.

Calculating Project Programming Times [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
As a lead developer I often get handed specifications for a new project, and get asked how long it'll take to complete the programming side of the work involved, in terms of hours.
I was just wondering how other developers calculate these times accurately?
Thanks!
Oh, and I hope this isn't considered an argumentative question; I'm just interested in finding the best technique!
Estimation is often considered a black art, but it's actually much more manageable than people think.
At Inntec, we do contract software development, most of which involves working against a fixed cost. If our estimates were constantly way off, we would be out of business in no time.
But we've been in business for 15 years and we're profitable, so clearly this whole estimation thing is solvable.
Getting Started
Most people who insist that estimation is impossible are making wild guesses. That can work sporadically for the smallest projects, but it definitely does not scale. To get consistent accuracy, you need a systematic approach.
Years ago, my mentor told me what worked for him. It's a lot like Joel Spolsky's old estimation method, which you can read about here: Joel on Estimation. This is a simple, low-tech approach, and it works great for small teams. It may break down or require modification for larger teams, where communication and process overhead start to take up a significant percent of each developer's time.
In a nutshell, I use a spreadsheet to break the project down into small (less than 8 hour) chunks, taking into account everything from testing to communication to documentation. At the end I add in a 20% multiplier for unexpected items and bugs (which we have to fix for free for 30 days).
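The spreadsheet arithmetic is simple but worth making explicit. A sketch of the chunk-plus-multiplier approach described above (the task names and hours are invented; only the under-8-hours rule and the 20% buffer come from the text):

```python
# Chunks of under 8 hours each, covering testing, communication, and docs.
chunks = {
    "login form": 6,
    "login form tests": 3,
    "password reset": 5,
    "docs + handoff": 2,
}

BUFFER = 0.20  # 20% multiplier for unexpected items and bugs

assert all(h < 8 for h in chunks.values()), "break large chunks down further"

base = sum(chunks.values())
estimate = base * (1 + BUFFER)
print(f"base: {base}h, with buffer: {estimate:.1f}h")
```

Keeping each line item under 8 hours is what makes the later actual-vs-estimate comparison meaningful: a chunk that small is either done or it isn't.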
It is very hard to hold someone to an estimate that they had no part in devising. Some people like to have the whole team estimate each item and go with the highest number. I would say that at the very least, you should make pessimistic estimates and give your team a chance to speak up if they think you're off.
Learning and Improving
You need feedback to improve. That means tracking the actual hours you spend so that you can make a comparison and tune your estimation sense.
Right now at Inntec, before we start work on a big project, the spreadsheet line items become sticky notes on our kanban board, and the project manager tracks progress on them every day. Any time we go over or have an item we didn't consider, that goes up as a tiny red sticky, and it also goes into our burn-down report. Those two tools together provide invaluable feedback to the whole team.
Here's a pic of a typical kanban board, about halfway through a small project.
You might not be able to read the column headers, but they say Backlog, Brian, Keith, and Done. The backlog is broken down by groups (admin area, etc), and the developers have a column that shows the item(s) they're working on.
If you could look closely, all those notes have the estimated number of hours on them, and the ones in my column, if you were to add them up, should equal around 8, since that's how many hours are in my work day. It's unusual to have four in one day. Keith's column is empty, so he was probably out on this day.
If you have no idea what I'm talking about re: stand-up meetings, scrum, burn-down reports, etc then take a look at the scrum methodology. We don't follow it to the letter, but it has some great ideas not only for doing estimations, but for learning how to predict when your project will ship as new work is added and estimates are missed or met or exceeded (it does happen). You can look at this awesome tool called a burn-down report and say: we can indeed ship in one month, and let's look at our burn-down report to decide which features we're cutting.
FogBugz has something called Evidence-Based Scheduling which might be an easier, more automated way of getting the benefits I described above. Right now I am trying it out on a small project that starts in a few weeks. It has a built-in burn down report and it adapts to your scheduling inaccuracies, so that could be quite powerful.
Update: Just a quick note. A few years have passed, but so far I think everything in this post still holds up today. I updated it to use the word kanban, since the image above is actually a kanban board.
There is no general technique. You will have to rely on your (and your developers') experience. You will have to take into account all the environment and development-process variables as well. And even if you cope with all this, there is a big chance you will miss something.
I do not see any point in estimating the programming time only. The development process is so interconnected that estimating one side of it alone won't produce any valuable result. The whole thing should be estimated, including programming, testing, deploying, developing the architecture, writing docs (tech docs and user manual), creating and managing tickets in an issue tracker, meetings, vacations, sick leaves (sometimes it is better to wait for the guy than to assign the task to another one), planning sessions, and coffee breaks.
Here is an example: it takes only 3 minutes for an egg to fry once you drop it into the frying pan. But if you say that it takes 3 minutes to make a fried egg, you are wrong. You missed out:
taking out the frying pan (do you have one ready? Do you have to go and buy one? Do you have to wait in a queue for this frying pan?)
making the fire (do you have a stove? will you have to get logs to build a fireplace?)
getting the oil (have any? have to buy some?)
getting an egg
frying it
serving it on a plate (have any ready? clean? wash it? buy it? wait for the dishwasher to finish?)
cleaning up after cooking (you won't leave the dirty frying pan, will you?)
Here is a good starting book on project estimation:
http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
It has a good description of several estimation techniques, and it can get you up to speed in a couple of hours of reading.
Good estimation comes with experience, or sometimes not at all.
At my current job, my 2 co-workers (who apparently had a lot more experience than me) usually underestimated times by 8 (yes, EIGHT) times. I, OTOH, have only once in the last 18 months gone over an original estimate.
Why does it happen? Neither of them actually appeared to know what they were doing, code-wise, so they were literally thumb-sucking.
Bottom line:
Never underestimate, it is much safer to overestimate. Given the latter you can always 'speed up' development, if needed. If you are already on a tight time-line, there is not much you can do.
If you're using FogBugz, then its Evidence Based Scheduling makes estimating a completion date very easy.
You may not be, but perhaps you could apply the principles of EBS to come up with an estimation.
This is probably one of the big mysteries of the IT business. Countless failed software projects have shown that there is no perfect solution to this yet, but the closest thing to solving it I have found so far is to use the adaptive estimation mechanism built into FogBugz.
Basically you break your milestones into small tasks and guess how long it will take you to complete them. No task should be longer than about 8 hours. Then you enter all these tasks as planned features into FogBugz. When completing the tasks, you track your time with FogBugz.
FogBugz then evaluates your past estimates and actual time consumption, and uses that information to predict a window (with probabilities) in which you will have fulfilled your next few milestones.
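The core of that prediction can be sketched as a Monte Carlo simulation over your past estimate-vs-actual ratios. This is the general idea behind evidence-based scheduling, not FogBugz's actual implementation, and all the numbers here are invented history:

```python
import random

# Past (estimated, actual) hours for completed tasks - invented history.
history = [(4, 5), (8, 7), (2, 4), (6, 6), (3, 5)]
velocities = [est / act for est, act in history]  # below 1.0 means you ran over

remaining_estimates = [5, 8, 3, 6]  # hours for the tasks still to do

def simulate(n=10_000, seed=42):
    """Predict total remaining hours by replaying randomly chosen past velocities."""
    rng = random.Random(seed)
    totals = sorted(
        sum(est / rng.choice(velocities) for est in remaining_estimates)
        for _ in range(n)
    )
    # Report a 10th-to-90th percentile window, the way EBS-style tools do.
    return totals[n // 10], totals[9 * n // 10]

lo, hi = simulate()
print(f"80% confidence: between {lo:.0f}h and {hi:.0f}h")
```

The appeal of this approach is that chronic underestimators don't need to be fixed, only measured: their low velocities automatically widen and shift the predicted window.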
Overestimation is rather better than underestimation. That's because we don't know the "unknown" and (in most cases) specifications do change during the software development lifecycle.
In my experience, we use iterative steps (just like in Agile methodologies) to determine our timeline. We break projects down into components and overestimate on those components. We then take the sum of these estimations and add extra time for regression testing, deployment, and all the good work...
I think that you have to go back over your past projects and learn from those mistakes to see how you can estimate wisely.

Have you ever had to file "TPS reports"? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
No, not literally. Rather, do you have any red tape horror stories of policies that affected your ability to produce quality software? I'm not talking about general human resources or systems administration policies like this question, but policies that were directly targeted at the development process, such as bad source control policies, testing procedures, or bug tracking processes.
Please, no holy wars such as indents vs. spaces or bracing styles, but rather examples of loathsome bureaucracy, like the TPS reports of legend.
This is somewhat relevant to me as I've been reviewing my group's development process, and I'd like to see (for context) some of the worst processes that you've had to deal with. When does a structured policy or process go too far?
As a contractor, I've often had to file three separate time and expense reports.
Our official report used for invoicing.
Our project-specific fine-grained report. It has to match the aggregate invoicing report. And, it's available to project managers two weeks before the numbers from the invoices.
Our customer's activity reports. These have to match the aggregate invoicing also. The customer's accounting folks need these to confirm our invoices. Wait, didn't I create the invoices, too?
Let's not forget the need for two status reports (ours and the customers.)
I quite literally file TPS reports for one of the systems where I work: http://tps.tmccom.com/
And yes, I am very much aware of how outdated and non-standardized the site is.
No, but a few years ago I wrote the bulk of the MLI (Mandatory Liability Insurance) system for the State of Alabama...
Every report that the system generated was a TPS report :)
E.g. The Monthly Transaction TPS Report, The Daily Volume TPS report etc.
It was most amusing when someone from the State would call us up asking about the TPS reports :) I don't think they ever figured out why they were called TPS reports.
For the last several years we had to fill out a leave slip, signed by our first line supervisor, in order to take sick time or vacation.
Recently we were given access to a fancy web-application. It allows workers to request leave and allows supervisors to approve leave. It rolls up into our time sheet and it's the basis of our payroll system.
Despite tremendous success in rolling out the new leave request system, our office manager still required us to submit the paper leave slip, in addition to doing it on-line.
It took months before the office manager realized the new system provided just as much oversight as the manual system.
I currently have to outline my time in three separate utilities:
I enter my time at a high level (consulting time vs. holiday vs. vacation vs. sick, etc) for a period one week, showing hours worked per day on each. This one is for billing the client.
The client has a time tracking system that they just rolled out in which we have to enter our time at the request level. Admin time for client-related things (meetings, training, etc) has its own general-purpose bucket. Non-billable items have another. This one is for a period of one month, showing hours per week.
My company also has a time tracking tool, detailing everything we did in a given week. Time is tracked on the quarter hour, and is extremely fine grained. i.e. "For request 12345, I spent 0.25 hours writing an estimate, 0.50 hours writing a requirements document, 0.50 hours coding file x." Estimates also have to be entered into the system, and effectively locked down (Waterfall FTL!), before we send anything to the client for approval (long before anything is coded).
We also have a very strict peer review process. Anything official that we send to the client (requirements documents, change requests, code, etc) have to be peer reviewed first. The client also has a Change Control Board which meets once a week to approve anything that will be installed into production.
I once explained to some friends from college exactly how much process and paranoia surrounded my work. By the end of it, I'd figured out that in the hypothetical situation where (in a non-emergency, non-production-support situation) someone asked for a single field to be added to an existing report, the estimate, after all the process was taken into account, was three hours at the ABSOLUTE MINIMUM for what would essentially be adding a single field to an existing select statement (or something similar, as we use a tool which doesn't use SQL for DB queries). Additionally, since the estimate for this would be so small (that three hours represents ONLY the required 0.25-hour minimum for each required item, plus half an hour for the production change control meeting), I'd need to get my team lead to sign off on it first, since I'd be going so far against what our estimating tool says it should take me to change the code (this tool is largely based on LOC).
*sigh*
I think that's enough ranting for today.
At a previous job at a large old computer company we had a CRT process. I wouldn't say it was a completely awful over the top idea since the software product involved high-availability computing and was thus very risk averse. But it was annoying at times and certainly slowed down development.
Basically, the system was, after having your code peer reviewed by 3 people, you filled out a CRT form (which at some point I converted to a web application).
The CRT (Change Request Team) would review all the requests a few times a week and discuss with management, team leads and the coder in question to ensure all the hoops had been jumped through: All the tests written... appropriate people had reviewed it... QA informed of new tests... etc.
Thankfully the web application version was well accepted and the old manual form, which was really detailed and over the top, was dropped. At least from our organization...

Is there such a thing as a process smell? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
We're generally familiar with code smells here, but just as damaging, if not more so, is when the business side of things (as much as it falls within our domain) is going wrong.
As examples, the inverse of anything on the Joel test would be considered a major process smell (i.e. no source control, no testers) but those are obvious ones and the point of "smells" is that they're subtle and build into something destructive. I'm looking for granularity here.
To start off with, here are a couple (which can be turned into a list as the answers come in):
Writing code before you have a signed contract with the client
Being asked for on-the-fly estimates ("just a rough one will do") for anything which will take more than a day (a few hours?)
Ancient cargo-cult wisdom prevails (personal example - VisStudio sourcesafe integration is banned)
You've stopped having non-project specific group meetings (or lack any similar forum for discussion)
So what are some other process smells, and just how bad are they?
The book "Antipatterns" by William J. Brown et al. has a bunch of project-related smells. They aren't always disasters in progress; mitigating circumstances exist for just about any smell.
The Portland Pattern Repository also has a page on Antipatterns, covering many of the same topics as the "Antipatterns" book. Visit http://c2.com/cgi/wiki?AntiPatternsCatalog and scroll down to "Management Antipatterns." A few examples:
Analysis Paralysis - a team of otherwise intelligent and well-meaning analysts enter into a phase of analysis that only ends when the project is cancelled.
Give Me Estimates Now - a client (or PointyHairedBoss) demands estimates before you have enough data to deliver them. You accept the "challenge" and give off-the-top-of-your-head estimates (i.e. guesses). The client/boss then treats the estimate as an iron-clad commitment.
Groundhog Day Project - meetings are held which seem to discuss the same things over and over again. At the end of said meetings, decisions are made that "something must be done."
Design By Committee - Given a political environment in which no one person has enough clout to present a design for a system and get it approved, how do you get a design done? Put together a big committee to solve the problem. Let them battle it out amongst themselves and finally take whatever comes out the end.
Collect them all! :-)
Back Dating - being given an end date and then told what needs to get done
Inverse QA Coverage - QA focuses on the non-essential items (because that's all they know how to test)
Environmental Alignment Issues - the various environments (Dev, Test, Staging, Production) are not in sync for code and data - therefore any testing prior to production is invalid
Delivery Date Detachment - no one believes in the end date because: it was made up to begin with and 100% of prior projects never met their delivery dates
Old Grumpy Code - old code is feared because there's no desire to refactor
The evil PM triangle (scope, cost, resources and/or quality) - to adjust the project you need to add people, reduce quality, reduce scope, etc. Once a project is in motion, most changes (even a reduction in scope) will increase time and cost and reduce quality. Once the train tracks are down, it's tough just hanging a left turn.
One smell I have a real problem with (because I work with it): Not ditching tools, dev software, methodologies, or anything else that doesn't work.
Many times, there is one (or more than one) piece of software that clearly, blatantly, doesn't work and likely interferes with the development process, but which a project manager simply refuses to replace/upgrade "because it would cost too much {time, money, whatever} to replace."
Edit: This also extends to machines and other infrastructure too (examples: a build server that takes an hour to do a two-minute build, or a version control system - ahem CVS - that takes 15 minutes to find out whether there have been any updates on a 50MB source tree).
Shipping a prototype - "we'll productize it later"
I suggest checking out the Organizational section of the Wikipedia page on anti-patterns. The ones I've had to deal with are 'Crisis mode' and the 'on-the-fly estimates' you mentioned.
You haven't had a post project review....when the project ended 6 months ago.
Some smells I have seen:
Optimistic management, but they can't pay your salary this month. This is really bad. I left the company in time, but it died a few months later.
Extreme, fanatical team-building sessions focusing on how great the company is - but in the end it all goes down.
Good new people are laid off because they tried to change the process. A real shame. I have seen people who really tried to improve the company, but old habits never die, so it often ends in big disillusionment.
The boss is always right mentality...
There is more but I won't spoil the fun for others.
Changes to process are made with no thought to timing or current deliverables, then immediately reversed when deliverables turn up late due to instituting new process.
Someone goes on medical leave and the team falls behind trying to cover that person's work on top of their own. When the managers, clients, or client sales reps are told things will be delayed as a result, they are only concerned about when things will happen and whether you can work nights and weekends in the meantime - and never even ask how the person with the emergency is doing.
When overtime is expected of low-level people, but the people who want the work urgently leave on time and are not available to answer questions. Or when they make you work overtime to be ready by 8 am and then don't look at it in QA for three more days. Hello, I could have done it in regular hours by then.
Delivery of needed files (for data import for example) or information minutes before the due date and then blaming developers when due date is not met.
What I call: NIH (Process edition), a.k.a. Choose your own adventure.
Evidence of this:
you spend endless meetings debugging the process. And refactoring it.
nothing really gets done, because no one knows what they should be doing.
I guess this is an antipattern, rather than a smell.
Interesting question and even more interesting answers. Thanks for those.
I have been in almost all roles of software development (Developer, QA, Tech Lead, Project Manager - even client) and I can safely list the following smells:
How quickly does the team react to new inputs (and how accepting are they of change)?
How many layers does it pass through to get things done (bureaucracy)?
How clear are the features/tasks - are the goals SMART, and do we have any KPIs?
How serious is the team about the project it is working on?
Is the team meeting regularly (read: daily) to discuss achievements, goals and issues?
Most important, however, and the most evident (to a good nose) is the hygiene level of the project management tool being used (excel sheet, piece of paper, agile tools, email, whatever in whichever methodology you use). That is the first thing I notice while evaluating projects.
Do I know where the project stands at the moment?
Can I tell (Without asking the team) what needs to be done next?
Can I tell what the team is working on right now?
Can I tell when the next release is and if it's achievable?
Can I tell if the client is getting regular updates?
Can I tell if the client is giving approvals and if their feedback is taken care of in time?
Can I tell, just from looking, the load distribution of the project across the engineers?
Obviously, all this is well covered if you pick any modern Agile methodology, but depending on the market and the kind of work, your mileage may vary. So, keeping myself methodology-agnostic, this is the bare minimum set of smells you should be rid of.

Agile 40-hour week [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Have you ever worked on a (full-time) project where using Agile methodologies actually allowed you to accomplish a 40-hour work-week? If so, what were the most valuable agile practices?
Yes, I'm on a 40-hour week (actually 37.5 hours or so, per my contract) on a project that was run with SCRUM from the beginning. That was about 2 years ago and the first time we implemented SCRUM. It's the project with the least amount of overtime for me personally, and it's also a PC game we're developing. I'm not even in "crunch" mode right now even though we're shipping an open beta on Friday.
We have learned a lot since then about SCRUM and agile. The single most valuable lesson from my point of view is: pod sizes must be reasonable. We started out with pods of 12-20 members, and that didn't work out well at all; a maximum of 10 should not be exceeded. With big pods it's too easy to agree on "flaky" and "vague" tasks, because otherwise the standup and task planning meetings would take too long. So keep the pod size small, keep the tasks specific, and get the product owner's sign-off together with those who will work on the task.
Also, with a bi-weekly task planning schedule you have to get every Product Owner to agree on the task list and priorities for the current sprint, and new task requests must be issued before that planning meeting or else they will be ignored for the current sprint. This forced us to improve on inter-pod communication.
Scrum and management that is willing to buy into it.
Fair sprint planning. When you negotiate your own sprint, you can choose what your team can accomplish rather than have tasks handed down from above. Having your sprint commitment locked in (management can't change it mid-sprint) gives freedom from the ever-changing whims of people.
A well maintained, prioritized backlog that is maintained cooperatively by the product owner and upper management is very useful. It forces them to sit down and think about the features they want, when they want them and the costs involved. They will often say they need a feature now, but when they realized they have to give up something else to get what they want their expectations become more realistic.
Time boxing. If you are running into major problems, start removing features from the sprint rather than working extra hours.
You need managerial support for your process; without it, agile is just a word.
Did I mention enlightened management?
Not being able to complete the tasks in a 40 hour week could be due to several things.
I see that this could happen in the early sprints of a Scrum project because the team wasn't sure of:
the amount of work they can do in the sprint and might bite off more than they can chew, and
their ability to estimate accurately the amount of points to award to blocks of work, or
the amount of effort required to perform "a point's worth" of work.
They may also be overly optimistic in what they can accomplish in the time allotted.
After that we get into several of the bad smells of Scrum, specifically:
a team isn't allowed to own its own workload, and maybe
management overrides decisions on what should be in a sprint
If any of these cut in then you are:
doing Scrum in name only, and
"up the creek without a paddle."
There's nothing much you can do apart from correct any problems in the first list, but this will only come with experience.
Correcting the two points in the second list will require a major rethink of how the company is strangling, not employing, Scrum best practices.
HTH
regards,
Rob
It may sound tough, but let's be realistic. Use of agile or any other flavour of software process has nothing to do with a 40-hour week. Normally the amount of weekly work hours is stipulated in the employment contract, and a developer can use their discretion to put in any additional unpaid work.
Please let's not attribute magic healing powers to whatever software process you prefer. It can provide a different approach to risk management, a different planning horizon, or better stakeholder involvement; however, unless slavery is still lawful where you live, the working day starts when you come through the door and ends when you go home.
It is as much up to the developer as to their management to make sure that the contract of employment is not breached. Your stake is limited by the amount of pay you get and the amount of honest work hours you agreed to give in return, regardless of the methodology used.
Certainly.
For me, the most important things that helped (in order of importance):
Cross-functional team - having programmers, testers, technical writers and sales/services people in the same team and talking to each other daily (daily call) was great.
Regular builds and continuous integration
Frequent reviews/demos to stakeholders and customers. This reduces the risk and time lost to it only for the period of iteration (Sprint).
Daily Call or Stand up meeting
Adding to all of the above (inaccurate estimates, badly implemented Scrum, etc.), the problem could be a lack of understanding of your team's Velocity - something as simple as "how much work a team can accomplish", but which is not as easy to pin down as it may seem.
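As a rough illustration of that point (the function name and the sprint numbers below are made up for the example, not from any answer above), velocity is often just a rolling average of story points actually completed in recent sprints:

```python
# Hypothetical sketch: estimating team velocity from past sprints.

def velocity(completed_points, window=3):
    """Rolling average of story points completed in the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

past_sprints = [21, 34, 25, 30]   # points actually finished each sprint (example data)
print(velocity(past_sprints))     # average over the last three sprints
```

The catch, as the answer says, is that the inputs are only as good as the team's estimation discipline: garbage points in, garbage velocity out.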
I've worked at several shops that practiced various agile methodologies. The most interesting one had 4 "sessions" throughout the day that were about an hour and a half long, with a 20 minute break in between. Friday was Personal Dev day, so the last two sessions were for whatever you wanted to work on.
The key things for us were communication, really nailing down the concept of user stories, defining done to mean "in production", and trust. We also made sure to break the stories down into chunks that were no more than a day long, and ideally 1-2 development sessions. We typically swapped pairs every session to every other session.
Currently I run a 20+ person dev team which is partially distributed. The key tenet for me is sustainable pace - and that means I don't want my teams working more than 40-hour weeks even occasionally. Obviously if someone wants to stay late and work on things, that's up to them, but in general I fight hard to make sure that we work within the velocity a 40-hour week gives us.
As both a Scrum Master and personnel manager, I have been a strong advocate of the 40-hour work week. I actively discourage team members from working over 40 hours, as productivity drops quickly when the work-life balance shifts. I have found that recuperating from a late-night work day often takes longer than the extra hours worked.
When it is well-run, Scrum aids in minimizing the "cram" that often occurs at the end of an iteration by encouraging (requiring?) a consistent pace throughout and tools like velocity and burndowns work well to plan and track progress.
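A burndown like the one mentioned above boils down to comparing points remaining each day against an ideal steady-pace line; here's a minimal sketch (the committed points and daily measurements are invented example data):

```python
# Hypothetical burndown sketch: remaining work vs. the ideal steady-pace line.

def ideal_line(total_points, sprint_days):
    """Points that should remain at the end of each day for a steady pace."""
    per_day = total_points / sprint_days
    return [total_points - per_day * (day + 1) for day in range(sprint_days)]

committed = 40                                      # points committed this sprint
remaining = [36, 33, 30, 24, 20, 15, 12, 8, 5, 0]   # points left, measured daily
ideal = ideal_line(committed, len(remaining))

for day, (actual, target) in enumerate(zip(remaining, ideal), start=1):
    status = "behind" if actual > target else "on pace"
    print(f"day {day}: {actual} left (ideal {target:.0f}) - {status}")
```

The value for pacing is that "behind" shows up on day two, not in a cram at the end of the iteration.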

Resources