Policy for fixing broken nightly builds [closed]

I guess everybody agrees that continuous builds and continuous integration are beneficial for the quality of a software product. Defects are found early so they can be fixed as soon as possible. For continuous builds, which take several minutes, it is usually easy to find the person who introduced a defect. However, for nightly integration tests, which take a long time to run, this can be a challenge. Here are the specifics of the situation for which I'm looking for an optimal solution:
Running the integration tests takes more than an hour, so they are run overnight. Multiple check-ins happen every day (a team of about 15 developers), so it is sometimes difficult to find the "culprit" (if any).
The integration testing environment depends on other environments (web services and databases), which may fail from time to time, causing the integration tests to fail.
So how should the team be organized so that these failures are fixed early? In my opinion, someone should be appointed to DIAGNOSE the defect(s). This should be the first task in the morning. If they need the expertise of others, those people should be readily available. Once the source of the failure (component, database, web service) is determined, its owner should start fixing it (or another team should be notified).
How should the person who diagnoses the defects be appointed? Ideally, someone would volunteer (ha ha). That won't happen very often, I'm afraid. I've heard another option: whoever arrives at the office first checks the results of the nightly builds. This is fine if the whole team agrees, but it rewards those who come in late. I suppose this role should rotate within the team. The excuse "I don't know much about builds" should not be accepted. Diagnosing the source of a failure should be fairly straightforward, and if it is not, adding more diagnostic logging to the code should improve visibility into integration test failures.
Any experience in this area or suggestions for improving the above approach?

A famous policy for broken nightly builds, attributed to Microsoft, is that the person whose commit broke the build becomes responsible for maintaining the nightly builds until someone else breaks them.
That makes sense, since:
everyone makes mistakes, so the necessary rotation will occur (with a least-recently-used rule to settle ambiguous cases)
it encourages people to write better code

What I generally do (I've done it for teams of between 8 and 10 people) is to have one person check the build as the first thing they do in the morning -- some would say that person is responsible for QA, I suppose.
If there is a problem, they're responsible for finding out what happened and how -- of course, they can ask for help from other members of the team, if needed.
This means at least one member of the team has to have a great knowledge of the whole application -- but that's not a bad thing anyway: it will help diagnose problems the day the application is in production and suffers a failure.
And instead of having one person do that, I like it when there are two: one for the first week, the other for the second week, for instance; this way, there is a greater chance of always having someone who can diagnose problems, even if one of them is on holiday.
As a side note: the more useful things you log during the build, the easier it is to find out what went wrong -- and why.
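To make that side note concrete, here is a minimal sketch (not taken from the answer above; the step names, make targets, and log file name are invented for illustration) of a Python build driver that records each step's command, duration, and exit status, so the morning check starts from a log rather than from guesswork:

```python
# build_steps.py - a sketch of adding useful logging to a nightly build script.
import logging
import subprocess
import time

logging.basicConfig(
    filename="nightly-build.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def run_step(name, command):
    """Run one build step and log its command, duration, and exit status."""
    logging.info("starting step %r: %s", name, " ".join(command))
    start = time.time()
    result = subprocess.run(command, capture_output=True, text=True)
    elapsed = time.time() - start
    if result.returncode == 0:
        logging.info("step %r passed in %.1fs", name, elapsed)
    else:
        # Keep the tail of the output so the morning diagnosis starts with a clue.
        logging.error("step %r FAILED in %.1fs:\n%s", name, elapsed,
                      result.stdout[-2000:] + result.stderr[-2000:])
    return result.returncode

if __name__ == "__main__":
    steps = [
        ("compile", ["make", "all"]),                      # hypothetical targets
        ("integration-tests", ["make", "integration-tests"]),
    ]
    for name, command in steps:
        if run_step(name, command) != 0:
            break
```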
Why not let everyone in the team check the build every morning?
Well, not everyone wants to, first of all -- and it will be done better if the person doing it likes the job
And you don't want 10 people spending half an hour every day on that ^^

In your case I'd suggest whoever is in charge of the CM. If that is a manager or technical lead who already has too many responsibilities, why not give it to a junior developer? I wish someone had forced me early in my career to get to know source control more thoroughly. Besides, looking at other people's code to track down the source of an error is a real skill-building exercise. They say you gain the most from reading other people's code, and I'm a firm believer in this.

Pair the experienced with the inexperienced
You may want to consider having pairs of developers diagnose broken builds. I've had good luck with that, especially when you pair team members who have little familiarity with the build system with team members who know it well. This reduces the chance of team members saying "I don't know much about builds" as a way to get out of the duty, and it will improve your bus factor and increase collective ownership.
Give the team a choice of your assigned solution or one of their own making
You could put the issue to your team and ask them to offer a solution. Tell them that if they don't come up with a workable solution, you will make a weekly schedule, assigning one pair per day and making sure that everyone has the opportunity to participate.

Practice continuous integration so you don't need infrequent mega-builds
(you can distribute builds between machines if it's too slow for one machine to do)
Use a build status monitor so that whoever checked something in can be made responsible for build failures (a rough sketch of this idea follows this answer).
Have an afternoon check-in deadline
Either:
Nobody checks in after 5pm
or
Nobody checks in after 5pm unless they're prepared to stay at work until their build goes green - even if that means working on a fix, committing it, and waiting for a rebuild.
It's much easier to enforce and obey the first form, so that's what I'd follow.
Members of a former team of mine actually got phoned up and told to return to work to fix the build... and they did.
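To illustrate the build-status-monitor point above, here is a rough sketch, not tied to any particular CI server's API: after a failed nightly run it lists the commits (and their authors) made since the last green build, assuming the build script records the last green revision in a marker file (a hypothetical convention):

```python
# who_broke_it.py - list candidate culprits since the last green nightly build.
import subprocess

LAST_GOOD_FILE = "last_good_revision.txt"   # hypothetical marker written after green builds

def commits_since_last_good():
    last_good = open(LAST_GOOD_FILE).read().strip()
    log = subprocess.run(
        ["git", "log", "--format=%h %an %s", f"{last_good}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in log.stdout.splitlines() if line]

if __name__ == "__main__":
    suspects = commits_since_last_good()
    print(f"{len(suspects)} commit(s) since the last green nightly build:")
    for line in suspects:
        print("  " + line)
```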

I'd be tempted to suggest splitting things up in either of a couple of ways:
Time split - Assuming the tests could run twice a night, why not run them against the code at two different points in time, i.e. all the check-ins up to X p.m. and then the remainder? That would help narrow down where the problem is (sketched after this answer).
Team split - Can the code be split into smaller pieces so that the tests could be run on different machines, to help narrow down which group should dig into things?
This assumes you could run the tests multiple times and divide things up in such a way, so it is only a rough idea.
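Here is a sketch of the "time split" option, assuming a git repository, a main branch, and a make target for the suite (all assumptions, not details from the question): check out the last commit before each cut-off and run the tests once per window, so a failure points at one half of the day's check-ins.

```python
# split_nightly.py - run the suite against two snapshots of the day's check-ins.
import subprocess

def checkout_as_of(cutoff):
    """Check out the last commit on main made before the given time."""
    rev = subprocess.run(
        ["git", "rev-list", "-1", f"--before={cutoff}", "main"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    subprocess.run(["git", "checkout", rev], check=True)
    return rev

def run_suite():
    return subprocess.run(["make", "integration-tests"]).returncode  # hypothetical target

if __name__ == "__main__":
    for label, cutoff in [("first half", "13:00"), ("second half", "23:59")]:
        rev = checkout_as_of(cutoff)
        status = "PASSED" if run_suite() == 0 else "FAILED"
        print(f"{label}: revision {rev} {status}")
```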

We have a stand-up meeting every morning, before starting work. One of the things on the checklist is the status of the nightly build. Our build system spits out an email after it's run, reporting the status, so this is easy to find out - as it happens, it goes to one guy, but really it should go to everyone, or be posted onto the project wiki.
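For what it's worth, a minimal sketch of such a post-run status email (the summary file, addresses, and SMTP host are placeholders, not details from this answer):

```python
# mail_build_status.py - email the nightly build status to the whole team.
import smtplib
from email.message import EmailMessage

def send_status(summary_path="build_summary.txt"):
    with open(summary_path) as f:
        body = f.read()
    msg = EmailMessage()
    msg["Subject"] = "Nightly build: " + ("FAILED" if "FAILED" in body else "passed")
    msg["From"] = "build@example.com"
    msg["To"] = "dev-team@example.com"   # send to everyone, not just one person
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    send_status()
```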
If the build is broken, then fixing it becomes a top-priority task, to be handled like any other task, which means that we'll decide at the standup who is going to work on it, and then they go and do so. We do pair programming, and will usually treat this as a pair task. If we're short-staffed, we might assign one person to investigate the breakage, and then pair someone with him to fix it a bit later.
We don't have a formal mechanism for assigning the task: we're a small team (six people, usually), and have collective code ownership, so we just work it out between ourselves. If we think one particular pair's checkin broke the build, it would usually be them who fix it. If not, it could be anyone; it's usually decided by seeing who isn't currently in the middle of some other task.


How to effectively measure developer's work hours? [closed]

I have a few software developers working on my projects and I would like to give them a way to record the time they spend on actual development.
There is goodwill about recording development hours, no coercion, but we try to avoid techniques like an Excel time sheet because that is so uncomfortable.
I can track svn commits, but this is unreliable. Developers also help support different projects during the day, so assuming they work on one project all day long is not true.
I've seen utilities that pop up a message every hour to confirm the project you're working on, but this is annoying.
Some kind of active-window-title analyzer might help (you can get the solution name from there in the case of Visual Studio), but I have no experience with such an approach.
If you have any experience with recording programmers' and designers' work hours, please share it with me.
Thanks
It's a good question, and the very best way to measure hours spent on a development project is not to measure hours spent at all.
You say there is goodwill about registering hours, but I have my doubts. Realistically, from a developer's point of view, excessive time management is a distraction for most (perhaps all) developers on the planet.
I can understand why time is measured so obsessively on oDesk. There is a good reason for this: project time is paid up front by the client to oDesk, and the developer needs to prove the hours worked to oDesk. Payment is also guaranteed, and it's unlikely oDesk providers and developers ever meet in real life, so there is no trust.
Since it's unlikely you're paying your developers up front, and it's more likely you have better trust established, you should perhaps switch your attention from a management strategy that's clunky and annoying to something far more useful.
Yes, ladies and gentlemen, I am talking about Scrum. Throw any notion of keeping hour-by-hour tabs on your developers out the window (they'll love you for it), and instead introduce Scrum management into the scenario.
Create some sprints (milestones) for your product development, and within them have a list of iterations (batched deliverables); try to keep your iterations down to a weekly period.
Create a product backlog, and make sure you're aware of who exactly is working on what. Find someone on the team to act as a scrum master on your behalf, or take on this responsibility yourself. Make sure you have daily feedback meetings; keep them short and focused, to identify any risks that might get in the way of deliverables. Let the developers more or less drive the timing process, and get realistic estimates for tasks.
Read a book or two on Scrum, and get others and team members involved in the learning curve. Tweak the base Scrum methodology to best suit your particular style of management, and I assure you, you will have a very happy team.
Measure your time in man-days, and try to avoid getting on the back of a developer for hour-by-hour progress...
There are various time-tracking software tools, as you have probably already seen from doing Google searches. But in all honesty you are asking for the Holy Grail of time tracking. For example, do you consider a developer staring at her code and thinking as development time? She might stare at the screen for an hour and only type for 10 minutes. In that case it looks like she didn't work much, when really she worked for an hour and 10 minutes. I don't mean to say what you're asking for isn't a valid thing to want; it just seems to be one of those problems that doesn't have a perfect solution.
Good luck.
I think you're asking the wrong question and are headed down a slippery slope. There's so much that goes into development that has nothing to do with actually coding.
I think a better solution, if you're dead set on tracking something, is to track the time spent on activities NOT related to development. Of course there's some grey area there too. For example, a meeting to discuss user requirements should probably be counted towards development even though no coding will get done.
You need something like a dashboard to measure time on task. The only way to know the real software development time is to have the developers track it; that way, when they switch tasks they stop the clock, so to speak. I think the hardest thing would be getting the developers to use it as a measure of how long they work on a certain project or even a code module. If you can then use those metrics to reduce distractions and other time sinks, you might at least be able to get a decent rough estimate of how much time they spend coding versus email versus talking to other developers, etc.
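A minimal sketch of the "stop the clock" idea, as a tiny start/stop command that appends intervals to a CSV (the file name and task identifiers are made up for illustration):

```python
# task_clock.py - punch in and out of tasks; intervals accumulate in a CSV log.
import csv
import sys
import time

LOG = "time_log.csv"

def punch(action, task):
    """Append a timestamped start/stop record for the given task."""
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow([time.strftime("%Y-%m-%d %H:%M:%S"), action, task])

if __name__ == "__main__":
    # usage: python task_clock.py start "PROJ-123 fix login bug"
    #        python task_clock.py stop  "PROJ-123 fix login bug"
    action, task = sys.argv[1], sys.argv[2]
    punch(action, task)
```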
If you're trying to measure what the developer has as their active window, you have to assume goodwill, because any decent developer can sneak around that if you're trying to turn the screws on them. I spend about a third of my "development time" in Firefox looking at references, for example.
Maybe ask the developers just to keep a log, so you know where their time is going? While that's not ideal, you're never going to do much better than that.
If you are trying to measure the time spent on distractions, disturbances, and other tasks, would it not be in your developers' interest to give you this information willingly?
You said somewhere that you are implementing scrum.
If you really have to, either take it up at the daily scrum, making it part of the ritual, or add a very short daily meeting at the end of the day. Let the developers guesstimate how much of the day was spent on distractions, disturbances, and other tasks. To me that feels like it will be as close to "correct" as any other way of measuring, considering the difficulties involved.
So, instead of having the developers note down the time, make it the scrum master's job to sum it up, and make this as painless as possible for the devs. Make sure that the developers gain something tangible from doing it as well; otherwise it is going to end up on the impediment list awfully quickly.
As Dean J implied, you have to trust the developers anyway.
It depends on your IDE. If you are using Eclipse, then I recommend the Mylyn plugin. You can measure the time spent on each task, and tasks can be fetched from many well-known task repositories, e.g. Tuleap.
The user only needs to put the task in Active mode, and deactivate it when the task is finished, to stop the timer. I think Mylyn supports such a process: if the status of a task changes, this would trigger the active mode (and if the task is closed, deactivate it).
Sometimes development involves using a browser or a terminal. Eclipse can be used as a browser and as a terminal as well, so the developer does not need to leave Eclipse, and almost every activity related to a task can be measured.
Try DotProject or XPlanner
An active window analyzer won't give you reliable results, because your developers will swap between programs (Outlook, File Explorer, version control, internet browser, etc.). Your proposed analyzer won't log that time, although it will very likely be part of the development time that the developer puts into the project (software development is not just coding in VS all the time).
Trying to measure a developer's work hours is the wrong notion. A proper question to ask is what is a programmer's effectiveness. That can't be measured by coding time, time spent sitting in front of a computer, or the like.
As Joel Spolsky put it well in a blog on software craftsmanship, software development "...is not a manufacturing process."
A related but somewhat different discussion appears in this SO article on an invasive programmer productivity measurement tool.
I definitely would not recommend using any time-measuring software that developers are forced to use. It is a massive distraction to developers' concentration.
Instead, the following simple techniques can be used:
Spreadsheet: for smaller projects or teams of developers
There's probably nothing easier than creating and sharing a spreadsheet online: add project tasks to it, assign tasks to developers, let the developers estimate hours for their tasks, let them update their task statuses (a very rough value between 0% and 100%) as they wish, and let them specify the time (in hours) it really took to complete each task.
So in the spreadsheet you may have columns:
task name, assigned to, estimated hours, real hours, done (%)
Google Drive spreadsheet may be the answer.
This is a very simple and fast method which distracts developer(s) minimally.
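If the spreadsheet is exported as CSV, a small script can summarize it. Here is a sketch that assumes the column names listed above and a file called tasks.csv (an assumption), and prints estimated versus real hours per developer:

```python
# hours_report.py - summarize the task spreadsheet per developer.
import csv
from collections import defaultdict

totals = defaultdict(lambda: {"estimated": 0.0, "real": 0.0})

with open("tasks.csv", newline="") as f:
    for row in csv.DictReader(f):
        dev = row["assigned to"]
        totals[dev]["estimated"] += float(row["estimated hours"] or 0)
        totals[dev]["real"] += float(row["real hours"] or 0)

for dev, t in sorted(totals.items()):
    print(f"{dev}: estimated {t['estimated']:.1f} h, real {t['real']:.1f} h")
```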
Scrum: for medium to large projects or teams of developers
Tasks and the Scrum information are recorded and kept on a board in the office, and/or a dedicated Scrum application can be used. A good web Scrum application is Pivotal Tracker, which I would recommend for any size of project or team.
More about Scrum:
http://en.wikipedia.org/wiki/Scrum_(development)
In both cases, the product owners (clients or those who deal with clients), project managers, and all developers can more quickly and effectively:
communicate
estimate
see progress of the team and all individuals involved

Project Termination [closed]

I was recently working with a team to develop an online system. We had worked for several months and were making good progress when the project got canned. We all felt strongly that the project's completion was important and that it would have a great effect on our consumers' productivity. After being frustrated for a while, I thought I should ask some people with more experience.
What is the best way to deal with the frustration of a canned project and move forward so that it doesn't hold future possibilities back?
On a well designed project, some of the code you developed can be reused in future projects, making it worthwhile. Even if you can't use any of it however, you and your team probably gained valuable experience that will help in the future as well. Think of it as an expensive team exercise.
Don't put your heart and soul into someone else's project?
I do a lot of work for different people, and while some projects are more interesting than others, they're not my projects, so I wouldn't be too broken up if they got canned. I've got my own stuff I'm working on. No one can terminate those projects but me.
Grieve. Such a loss will produce a grief reaction. Not one as strong as though you had lost a loved one, but it's a grief reaction nonetheless, complete with all those stages of grief.
Failure is the best (and sometimes only) way to learn new things, even if the failure is not your fault. There are many different angles by which you can salvage useful information from this:
Code that is reusable
New technologies or skills garnered from the project
Lessons about project management based on how the failure was handled (maybe the project should have been canceled much sooner, before the team bought into it)
Non-technical ideas that you can reuse in other projects for the company or even in your own endeavors.
I highly recommend doing a postmortem, but don't dwell. Most projects get canned at some point in their cycle and if you let it affect your morale, it becomes a downward spiral from which it's hard to recover. You may become oversensitive to even slight requirements changes.
Attack every project as though it were your own. By this I don't mean invest all of your emotions (as stated here already by Spencer Ruport), but write and organize all your code in a manner that lets you easily pull out tools you might need in the future. You never know if you will need them... but odds are you will. If you write an account manager app, do it in a modular, reusable fashion. If you write an image uploader, write it in a way that it can be ported to any other project you have. Write helper functions around all of your major features to make them more user-friendly down the road.
This of course requires some planning prior to losing the gig! No worries. It is rarely because of you that you (the whole team) lose the gig. Some financial or business decision is usually at play; in this case most likely the economy is what killed you. In the case that you don't get any tangible benefits from the failed project... look at it as a learning experience. Inevitably... no matter how good you are... you probably did something that you no longer agree with. Learn from that. You most likely also did something very cool that you loved. BLOG ABOUT IT! This serves two purposes: you just created something tangible from the project, and you put it somewhere you won't forget about it.
Sucks all the way around. But at least there is a great market out there right now! Contact me directly if you want my headhunter list (80 technical recruiters in CA and the US).
Two things:
Your investment in the project & code: The fact that your team had such strong feelings for the project and was so frustrated by it being canned is a good sign - it means you are a true developer/programmer and are not just doing a half job for full pay. So to deal with the project being canned: know that you and your team are committed to your work, and while that project may not have panned out, you sound like a real credit to that project and any other you may work on. It sounds like you just need to find a project/opportunity that has legs.
My experience: Projects get canned for all sorts of reasons - budget, lack of confidence from stakeholders, too late to market, changed scope, etc. I would investigate why your project was canned. If it is budget or lack of stakeholder confidence, then it is really good news: it means an opportunity has just presented itself to you and your team. Consider pursuing it!
Either way your team will have grown from the experience: both technically & from a business perspective.
cash the paycheck - that always helps ;-)
ask if you can have the rights to the canned project, since they don't want it, then open-source or commercialize it yourself if you think it's worthy
it's good to care about your work; it's not so good to obsess over it.
there will be other projects even better than that one in the future; they might also get canned, for any number of reasons both rational and irrational
Good example: I once worked with a lady who spent 2 years on a document-imaging project that was canned a few days before it was supposed to go live; it was canned because the new manager did not like the old manager, and the project was his "pet". This lady's reaction: "I'm looking forward to learning something new!"
This can be used to bring your team closer together, if you have the right sort of people. There is nothing quite like working hard on something you believe in and then having it canned. It can depress, but it can also motivate people to want to prove next time that they can do the job, that they had the right idea.
It helps to galvanize the team; we were there, we worked hard, and it was taken from us.
Of course, it's better not to be in that situation to start with, but when you find yourself there use it to build the team.
Sunk cost cannot be used as a reason for the continuance of a project. If the leaders have made a business decision then I'm sure that it is well motivated, however upsetting.
I'd console yourself with the thought that big swings should be celebrated in business; big companies do not win every bid or complete every project they start. So take comfort in having lost this once. Maybe you can change the way things were done, or focus more on the project stakeholders to make sure they understand why your project is worth completing compared to the other projects and business initiatives at the company.
I'll finish with my favourite saying:
"Good judgement comes from experience. Experience comes from bad judgement."
Learn from it!
Watch a Rocky movie (the last one was good) and have a few beers. There's no way not to put yourself into a project, there's no way to not feel bad about a project being terminated or failing, there's no way not to feel negative about the company. What makes a good programmer better is taking all the emotions, anger, etc. and being able to release it and move on with the same focus and dedication that was there with the first project. All part of life and all part of working in IT.

Skunk Works Projects [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 3 years ago.
Improve this question
Skunk Works Project: A project carried out by one part of a company without the knowledge of the remainder of the company.
Looking for stories about any skunk works projects you've worked on or initiated:
Was it successful?
Were you found out?
Were you punished or rewarded?
How did you fund it?
How did you staff it?
How long did it take to finish, compared to above-ground projects?
What was the cost, compared to above-ground projects?
Was it formally adopted?
Excellent question. Very important question.
Geoffrey Moore (Inside the Tornado, Crossing the Chasm, etc.) has written that, as he lectured around the world, he had one question he would ask every client (including those like GE, Motorola, etc.)
The question was:
"Can you give me even one example of a truly ground-breaking, paradigm shifting innovation that has ever come out of your company's research or product development process?"
At least at the point where I read the quote, not one example had been identified. And in most cases, such products or services had been conceived, designed, and largely developed by small groups of people who at best were ignored, but were often actively opposed by R&D.
Not sure if this qualifies as a "skunk works" project, but here's a great story from This American Life. It's Act Two of this March 2005 episode.
Amy O'Leary tells the story of a software writer at Apple Computer whose job contract ends, but he refuses to go away. He continues to show up at work every day, sneaking in the front door, hiding out in empty offices, and putting in long hours on a project the company cancelled. There were no meetings, no office politics, no managers interfering with his work. Soon, he had written a perfect piece of software. His final problem is figuring out how to secretly install it in Apple's new computers without anyone noticing. (12 minutes)
Great listening for anyone, but especially programmers in this case.
I have actually been in a number of these "secret" project situations. They were not fully supported when started, and were kept very secret. I'll discuss one of these along the lines of your questions.
Was it successful?
Yes, the system developed was put into place 3 years ago, and has been functioning ever since.
Were you found out?
Yes, it was discovered, and it was part of the overall plan.
Were you punished or rewarded?
With a working prototype we were rewarded, given the extra resources needed, and eventually the system was put into place for the entire company to use.
How did you fund it?
It was a development activity that was simply completed in down-time and personal time by various people.
How did you staff it?
See the above.
How long did it take to finish, compared to above-ground projects?
We did the entire thing in about 4 months, with dedicated resources a single person could have done it in about 2 months, or a team in about 3-4 weeks.
What was the cost compared to above-ground projects?
No cost; down-time that was already being "wasted" was put to effective use. All the infrastructure needed for the final incarnation was already there.
Was it formally adopted?
Yes, it is a solid part of the business plan now, and has been for over 3 years.
We are currently in this situation, although, admittedly, the project will not have high-visibility aspects, even though everyone will eventually be using it.
As part of a preparation to rebuild most of our enterprise applications, we have started developing an application framework that will be the basis for all of the replacement applications. We already have "bench projects" and "proof of concept" time that everyone is aware that we use to evaluate concepts. How it is different this time, though, is that we are actively developing a full project.
Was it successful? - We have not rolled out the full framework yet, but since it is modular, have been rolling out pieces in the legacy applications. Most of these are focused on stability and reporting/logging concerns. So far, they have exceeded expectations, allowing us to react to issues more rapidly, as well as eliminating some previously recurring ones.
Were you found out? - Well, this project has become one of the worst kept secrets I have ever seen. While there are quite a few people who have heard the name of the project thrown around a bit, I don't think anyone outside of a few of my developers and the testing team really know what it is about.
Were you punished or rewarded? - We haven't considered either side of this, yet. Unless the framework would cause negative effects, I doubt we would be punished for it. However, even if it is a success, the reward will be that no one notices anything other than improved applications.
How did you fund it? - Like mentioned before, bench time between other projects and inclusion in "proof of concept" work. I have also been putting some of my own personal time into it on my commute, since it will lay the groundwork for how all of my developers interact with the applications in the future.
How did you staff it? - I started with a series of small proof of concepts within the legacy codebase as part of "maintaining" the applications. Going in and fixing a defect often involved analytical steps on what could be done to prevent things from happening or improve the experience in the future. These were eventually extracted and refactored in their own assemblies, which became the beginning of the framework. We are now placing "covert" projects into our iterations that help flesh out these ideas through my developers, and we are now extracting and refactoring their efforts based upon the success of the implementation.
How long did it take to finish, compared to above-ground projects? - Yet to be determined. Since this is not an official project, so far it has really cost nothing. Bench time and "proof of concept" work is standard inclusion. The fact that we are essentially creating something from this time instead of throwing it away is gravy.
What was the cost, compared to above-ground projects? - Once again, yet to be determined. I imagine that the up-front cost will be relatively small compared to larger projects. Considering that this is a framework to contain commonly used extensions and improve the ability and quality of the developer to work efficiently, it will probably pay for itself before it is finished due to time-saving, improved practices, and reduction in defects.
Was it formally adopted? - The developers have embraced the concept. My immediate management is chomping at the bit. My management peers are excited, if not a little confused on what it will do. The measurement will be the success of the applications that are built off of the framework - which is some ways away still.
I built a tool to validate schema changes to the target DB at work. Prior to my tool we did it all by hand with fugly scripts that DBAs at client sites had to run. My tool started tracking the structure of the database to know whether certain things would work out. I got frustrated with having to hand-check all this stuff or suffer from the errors inevitable in doing things by hand, so I built my validator, and here is its story...
Was it successful?
Yes
Were you found out?
Yes. Part of the nature of a skunk works project is that it has to surface eventually.
Were you punished or rewarded?
Punished initially - why weren't you working on mainstream activities? But rewarded once the benefit was made evident and product errors were reduced. Then it was heralded - everyone loves a winner.
How did you fund it?
For the love of coding it up and making my life easier - so no direct funds were needed. Unless it was part of management's plan to have a secret project, I cannot see how this could be otherwise.
How did you staff it?
I coded alone as a lone developer on a grassy knoll with my laptop.
How long did it take to finish, compared to above-ground projects?
Not comparable. My skunk works effort was maybe a year of tinkering. If we had set out to do it directly, I can't imagine it would have taken less than two months, but I do not know, since that's not how it morphed. Downtime to think and plan may have made it faster in the end compared to direct planning up front.
What was the cost, compared to above-ground projects?
Undetermined. As I mentioned, given that I had down-time to think and plan, it was able to evolve in the direction I wanted without schedule or results pressure. In a shorter or more resource-intensive project we probably would have made some mistakes in rushing to get to some M1, M2, etc. Besides, if it hadn't worked out, it would have been as if it had never happened, as I could have folded up the tents and gone quietly into the night.
Was it formally adopted?
My project is a key part of the product build at my work, so I would say it's entrenched.
Hmm... I did one of these today actually.
We've got no real backup system in place. At present I get the highly enjoyable task of backing up 100GB of SVN repositories using svn hotcopy and .tar.gz files, while trying to juggle them across two or three NFS shares with limited disk space to get to the server with the backup disk. That's in the best case - i.e. when I can be bothered to babysit the process for 2 hours.
Since that's bound to end in catastrophe sooner or later I did a git svn clone on the largest one straight onto the backup server, then cloned that to my own machine and kicked out the svn working copy I was using. I've gained about 1GB of free space on my machine, given the most important backups some redundancy, and reduced a 15 minute svn st to a 30 second git status. And will I get complained at for it? Probably...
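A sketch of how that backup could be made routine, assuming the repositories have already been mirrored once with git svn clone (the paths are placeholders): a nightly git svn fetch pulls only the new revisions instead of re-copying 100GB.

```python
# backup_svn.py - incrementally refresh git-svn mirrors of the SVN repositories.
import subprocess

# Each mirror was created earlier with: git svn clone <url> <path>
MIRRORS = ["/backup/git-mirrors/big-repo"]

for path in MIRRORS:
    result = subprocess.run(["git", "svn", "fetch"], cwd=path)
    if result.returncode != 0:
        print(f"backup of {path} failed; fall back to the old hotcopy procedure")
```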
Generally the answers here have been success stories, so I thought I'd share my recent experience sitting just outside such a project that did not go so well.
How did you fund it?
How did you staff it?
The project started when my manager identified a potential employee, let's call him Fred, who had a pet project in our field. We don't pay well, and they agreed that Fred would be hired and would work almost full time on the project, which they would eventually introduce to the business.
So Fred started work on the project, known only to Fred's team, not to management or other parts of the business. Fred is a developer, and the work was more or less pure development, plus contributions to an underlying open-source project.
Was it successful?
Not really. Fred was working on it alone, and I think would have spent 12-18 months on it. Progress reports to the team consisted of describing whatever bug he was fixing that week. Occasional attempts were made to interest one or two higher-ups in the organization, but they never really went anywhere. Fred was supposed to put together a plan to finish and roll out the project so it could be introduced to the organization, but there always seemed to be some reason it was never done.
Were you found out?
Word slowly filtered out as Fred and the manager tried to interest more people in what they were doing.
Eventually we got restructured, and our new director wanted to know what everyone was working on, so the project was revealed to him. However, it was apparently not explained very well, since the new director wound up asking me (and others in our team, I am sure) what exactly Fred's project was.
Were you punished or rewarded?
Eventually the new director froze all funding for the project and Fred was reassigned to work on other projects. That's the current status as far as I know.
How long did it take to finish, compared to above-ground projects?
Was it formally adopted?
It was not finished and it was not adopted.
What was the cost, compared to above-ground projects?
The ostensible cost was Fred's time.
However, there were other costs.
First, Fred and his project became a bit of a joke in our team, and later in the teams we work with. What was he doing? Why was he doing it? Why was there no progress? Fred's reputation suffered. "Fred's project" became an in-joke for a project that was going nowhere.
Second, the eventual revelation of such a long-running but hidden project reflected poorly on our manager, and by extension on our whole team.
Third, resentment grew. Why was this guy working on his pet project when there was so much real work to be done? We are a small but busy team and we could have used a developer on any number of other projects.
In the end, I think this project has had consequences for our team's standing and dynamic. I occasionally talk it over with team members, when we're away from the office. Initially (and at the time) we were very critical of Fred, who can be an irritating guy, and who does not take criticism well, and who promised something he couldn't deliver. More recently, we've been critical of our boss. This was not a good way to run a project, and it was obvious from very early on that Fred did not have the skillset to do this work on his own and he would not seek or take advice. It was unfair to Fred that he was put in that position and left in it for so long. Lately I have wondered if I should have raised my concerns more forcefully. Though we did push Fred and our manager on what the project was and where it was going, we did not take it any further than our team. Having said that, I cannot imagine a good outcome even if we had.
Finally, I'd like to say that Fred is a smart guy and the project was not a bad one. It could have been successful (some parts have since come out in competing projects -- inferior competitors that actually delivered).
If this project had been done above board, and Fred had been working with a decent project manager and had a good communicator on the team, it could well have found a champion and delivered something great. Either that or it could have been killed a lot sooner.
I did one of these. It's actually how I ended up programming.
I was responsible for maintaining a legacy, er... "database". I won't go into gory details, but it was the usual evil application. The company pretty much ran on it, and it would sometimes go down for days. At the time the IS director (a friend) was actively looking for replacements, talking to large consulting organizations, etc., but management was committed/emotionally invested in the existing system. I volunteered (to the IS director) to try to rewrite it (well, more like he asked if anyone was interested in trying to deal with this mess, and I volunteered because I was bored). We had no real programmers on staff, and I'd only written a few small ad-hoc things. I had no idea how little I knew.
Finished the thing in about 8 months or maybe a year (this was a while ago, don't remember exactly).
Was it successful?
Yes, it worked as advertised.
Were you found out?
It initially started as a sort of super-secret, cloak and dagger thing. Kind of silly in retrospect, but it made it more fun. About halfway through it just started to become more obvious that that was what I was doing, and as it turned out the idea was supported. Writing this thing eventually became my job.
Were you punished or rewarded?
Rewarded
How did you fund it? / How did you staff it?
The success of it was pretty much due to the support of my boss, who made sure I had the time and resources I needed to do it.
Was it formally adopted?
Yes, we eventually ran the company on it.

Where does "Change Management" end and "Project Failure" begin? [closed]

I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data.
Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot.
So I ask you this:
What is change management, and how does it apply to a project?
Where does "change management" end, and "project failure" begin?
#shog9:
I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented.
I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?"
I think, most of the time, we developers forget that all of this we do is, after all, about business.
From that point of view, a project is not a failure while the client is willing to pay for it. It all depends on the client: some clients have more patience and better understand the risks of software development; others just won't pay if there's a substantial delay.
Anyway, about your question. Whenever you develop a project there are risks involved; maybe you schedule the end of the project for a certain date but it takes six months longer than you expected. In that case you have to balance what you have already spent and what you have to gain against the risks you're taking. There's actually an entire science called "decision making" that studies this at the software level, so your boss is not wrong at all.
Let's look at some questions: Is the client willing to wait for the project? Is he willing to absorb certain extra costs? Even if he isn't, is it worth completing the project and absorbing the extra costs instead of throwing away all the work already done? Can the company absorb what's already been lost?
The real answer to your problem lies behind those questions. You can't establish a point and say: here, if the project isn't done by this time then it's a failure. As for your specific situation, who knows? Your boss probably has more information than you have, so your job is to tell him how the project is going, how long it will take, and how much it will cost (in man-hours if you wish).
Unless the goals were clearly stated at the beginning of the project, there are no clear lines between "success" and "failure." Often, a project will have a varying degree of success/failure.
For some, just getting some concepts into code would be a success, while others may measure success as recovering all investments and making a profit.
Two well-known modes of failure are schedule slip and quality deterioration, but in the real world, people do not seem to care much about them.
Simple ways to slip the schedule are to let the managers make requests whenever they want (feature creep) and to let the programmers code whatever they feel is right (cowboy coding). Change management processes such as the sprint planning of Scrum and the planning game of XP are some examples; these are attempts by management and developers to ship reliable products on time. If either party is not interested in reliability or being on time, then change management will not be useful.
I suppose how successful the project is depends on who the client is. If the client were the company directors and they are happy, then the project was successful regardless of the failures along the way.
Andy Rutledge has written a pretty interesting article on success. Though the title is Pre-bid Discussions, the article defines what makes a successful project, which for Andy entails:
Will I or my team be allowed to bring our best work to the final result?
Is the client prepared to engage in the project appropriately?
Is the client prepared to begin this project?
Is the client prepared to invest trust in my or my team’s ideas?
Am I or is my team prepared to fulfill or exceed the project requirements?
This article was pointed out by Obie Fernandez, a successful consultant, in his Do the Hustle conference about consulting.
What is change management, and how does it apply to a project?
Change management is about approving and communicating changes to a project before they happen. If someone on your project (user, sponsor, team member... whoever) wants to add a feature, the change needs to be documented and analysed for its effect. Any resulting changes to scope, budget, and schedule must then be approved before the change is undertaken. These changes are typically approved by your sponsor, your steering committee, or your client.
Once the changes have been approved and accepted that is your new plan. It doesn't matter what the original budget or schedule was.
Change Management on projects is all about the principle of "No Surprises". The right people (your Change Control Board) need to approve any changes to Scope, Schedule and Budget before they are acted upon.
One thing to remember is that there may be certain explicit or implicit constraints and tolerances for change. You may have to deliver your project by a certain date to meet government regulatory requirements. Or your organisation may have a threshold such that once a project is 30% over the original budget it must go to "C" level or be killed. Investigating and explicitly stating these thresholds and tolerances up front is a good way of having more successful projects.
Where does "change management" end, and "project failure" begin?
If a project delivers on the approved scope, schedule and budget then it is successful.
However, it may still be viewed as a failure. Post Implementation Reviews are a good tool to qualify this with your stakeholders (not just your boss). Benefit Realisation would also be worth looking into, to see outside the black box of the project and understand the impact on the business as a whole.

Which Agile software development methods have you had the most success with? [closed]

There are numerous Agile software development methods. Which ones have you used in practice to deliver a successful project, and how did the method contribute to that success?
I've been involved with quite a few organisations which claimed to work in an 'agile' way; their processes usually seemed to be based on XP (extreme programming), but none of them ever followed anywhere near all the practices.
That said, I can probably comment on a few of the XP practices:
Unit testing seems to prove very useful if it's done from the start of a project, but it seems very difficult to come into an existing code-base and start trying to add unit tests. If you get the opportunity to start from scratch, test driven development is a real help.
Continuous integration seems to be a really good thing (or rather, the lack of it is really bad). That said, the organisations I've seen have usually been so small as to make any other approach seem foolish.
User story cards are nice in that it's great to have a physical object to throw around for prioritisation, but they're not nearly detailed enough unless your developer really knows the domain, or you've got an onsite customer (which I've never actually seen).
Standup meetings tend to be really useful for new team members to get to know everyone and what they work on. The old hands very quickly slack off and just say things like 'I'm still working on X', which they've been saying for the past week - it takes a strong leader to force them to delve into details.
Refactoring is now a really misused term, but when you've got sufficient unit tests, it's really useful to conceptually separate the activity of 'changing the design of the existing code without changing the functionality' from 'adding new functionality'
Scrum, because it shows where the slackers are. It also identifies much faster that the business unit usually doesn't have a clue what it really wants delivered.
Scrum.
The daily standup meeting is a great way to make sure things stay on track and progress is being made. I also think it's key to get the product/market folks involved in the process in a real, meaningful way. It'll create a more collaborative environment and removes a lot of the adversarial garbage that comes up when the product team and the dev teams are separate "silos".
Having regular retrospectives is a great way to help a team become more effective/agile.
More than adhering to a specific flavor of Agile this practice can help a team identify what is working well and adapt to a changing environment.
Just make sure the person running the retrospective knows what he/she is doing otherwise it can degenerate into a complaining session.
There are a number of exercises you can take a team through to help them reflect and extract value from the retrospective. I suggest listening to the interview with Linda Rising on Software Engineering Radio for a good introduction.
Do a Google search for "Heartbeat retrospectives" for more information.
I've been working with a team using XP and Scrum practices sprinkled with some lean. It's been very productive.
Daily standup - helps us keep complete track of what everyone is working on and where.
Pair programming - has improved our code base and helped stop "silly" bugs from being introduced into the system.
Iterative development - using one-week iterations has helped us improve our velocity by setting more direct goals, which has also helped us size requirements.
TDD - has helped me change my way of programming: now I don't write any code that doesn't fix a broken test, and I don't write any test that doesn't have a clearly defined requirement (a tiny red/green sketch follows this list). We've also been using executable requirements, which has really helped devs and BAs reach a shared understanding of requirements.
Kanban boards - show in real time where we are. We have one for the milestone as well as the current iteration. At a glance you can see what is left to do, what's being done, and what's done and accepted. If you don't report something in your daily standup pertaining to what's on the board, you have explaining to do.
Co-located team - everyone is up to speed and on the same page with what everyone else is doing. Communication is just-in-time and very productive; I don't miss my cube at all.
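As promised above, a toy red/green illustration of that TDD rhythm (the requirement and function are invented, not taken from this answer): the test is written first against a clearly defined requirement, fails, and then just enough code is added to make it pass.

```python
# test_discounts.py - a toy illustration of the red/green TDD rhythm.
import unittest

def order_total(prices, discount=0.0):
    """Requirement: apply the percentage discount to the sum of the prices."""
    return sum(prices) * (1 - discount)

class OrderTotalTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # This test existed (and failed) before order_total applied the discount.
        self.assertAlmostEqual(order_total([10.0, 20.0], discount=0.10), 27.0)

if __name__ == "__main__":
    unittest.main()
```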
