Currently, our organization does not practice Continuous Integration.
In order for us to get a CI server up and running, I will need to produce a document demonstrating the return on investment.
Aside from the cost savings of finding and fixing bugs early, I'm curious about other benefits/savings that I could include in this document.
My #1 reason for liking CI is that it helps prevent developers from checking in broken code, which can sometimes cripple an entire team. Imagine I make a significant check-in involving some DB schema changes right before I leave for vacation. Sure, everything works fine on my dev box, but I forget to check in the DB schema change script, which may or may not be trivial. Now there are complex changes referring to new/changed fields in the database, but nobody who is in the office the next day actually has that new schema, so the entire team is down while somebody looks into reproducing the work I already did and simply forgot to check in.
And yes, I used a particularly nasty example with DB changes, but it could be anything, really. Perhaps a partial check-in with some emailing code that then causes all of your devs to spam your actual end users? You name it...
So in my opinion, avoiding a single one of these situations will make the ROI of such an endeavor pay off VERY quickly.
If you're talking to a standard program manager, they may find continuous integration to be a little hard to understand in terms of simple ROI: it's not immediately obvious what physical product that they'll get in exchange for a given dollar investment.
Here's how I've learned to explain it: "Continuous Integration eliminates whole classes of risk from your project."
Risk management is a real problem for program managers that is outside the normal ken of software engineering types who spend more time writing code than worrying about how the dollars get spent. Part of working with these sorts of people effectively is learning to express what we know to be a good thing in terms that they can understand.
Here are some of the risks I trot out in conversations like these. Note that with sensible program managers, I've already won the argument after the first point:
Integration risk: in a continuous integration-based build system, integration issues like "he forgot to check in a file before he went home for a long weekend" are much less likely to cost an entire development team a whole Friday's worth of work. Savings from avoiding one such incident = number of people on the team (minus one, for the villain who forgot to check in) * 8 hours per work day * hourly rate per engineer. Around here, that amounts to thousands of dollars that won't be charged to the project (a quick sketch of this calculation follows this list). ROI win!
Risk of regression: with a unit test / automated test suite that runs after every build, you reduce the risk that a change to the code breaks something that used to work. This is much more vague and less assured. However, you are at least providing a framework wherein some of the most boring and time-consuming (i.e., expensive) human testing is replaced with automation.
Technology risk: continuous integration also gives you an opportunity to try new technology components. For example, we recently found that Java 1.6 update 18 was crashing in the garbage collector during a deployment to a remote site. Thanks to continuous integration, we had high confidence that backing down to update 17 would work where update 18 did not. This sort of thing is much harder to phrase in terms of a cash value, but you can still use the risk argument: certain failure of the project = bad; graceful downgrade = much better.
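To put a number on that first risk, here's a minimal sketch of the calculation in Python (the team size, rate, and hours are made-up figures, not from any real project):

    # Hypothetical cost of one "forgot to check in a file" incident.
    # All numbers are placeholders - plug in your own.
    team_size = 8      # developers blocked by the broken state
    hours_lost = 8     # one full work day
    hourly_rate = 75   # fully loaded cost per engineer-hour, in dollars

    # Everyone except the villain who forgot to check in loses the day.
    savings = (team_size - 1) * hours_lost * hourly_rate
    print(f"Avoiding one such incident saves ${savings}")  # $4,200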
CI assists with issue discovery. Measure how long it currently takes to discover broken builds or major bugs in the code. Multiply that by the cost to the company of each developer using that code during that time frame. Multiply that by the number of times breakages occur during the year.
There's your number.
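As a hedged sketch of that arithmetic (every input below is a placeholder you would replace with your own measurements):

    # Annualized cost of slow issue discovery, per the recipe above.
    hours_to_discover = 4      # average time a breakage goes unnoticed
    devs_affected = 10         # developers using the broken code meanwhile
    hourly_rate = 75           # cost per engineer-hour, in dollars
    breakages_per_year = 50    # roughly one per week

    annual_cost = hours_to_discover * devs_affected * hourly_rate * breakages_per_year
    print(f"Estimated annual cost of late discovery: ${annual_cost:,}")  # $150,000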
Every successful build is a release candidate - so you can deliver updates and bug fixes much faster.
If part of your build process generates an installer, this allows a fast deployment cycle as well.
From Wikipedia:
when unit tests fail or a bug emerges, developers might revert the codebase back to a bug-free state, without wasting time debugging
developers detect and fix integration problems continuously - avoiding last-minute chaos at release dates (when everyone tries to check in their slightly incompatible versions)
early warning of broken/incompatible code
early warning of conflicting changes
immediate unit testing of all changes
constant availability of a "current" build for testing, demo, or release purposes
immediate feedback to developers on the quality, functionality, or system-wide impact of the code they are writing
frequent code check-in pushes developers to create modular, less complex code
metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on developing functional, quality code, and help develop momentum in a team
a well-developed test suite is required to get the most value from CI
We use CI (two builds a day) and it saves us a lot of time by keeping working code available for test and release.
From a developer's point of view, CI can be intimidating when the automatic build result, sent by email to all the world (developers, project managers, etc.), says:
"Error in loading DLL: build of 'XYZ.dll' failed." and you are Mr. XYZ and they know who you are :)!
Here's my example from my own experiences...
Our system has multiple platforms and configurations with over 70 engineers working on the same code base. We suffered from somewhere around 60% build success for the less commonly used configs and 85% for the most commonly used. There was a constant flood of e-mails on a daily basis about compile errors or other failures.
I did some rough calculations and estimated that we lost an average of an hour a day per programmer to bad builds, which totals nearly 10 man-days of work every day. That doesn't factor in the cost in iteration time when programmers refuse to sync to the latest code because they don't know if it's stable; that cost us even more.
After deploying a rack of build servers managed by TeamCity, we now see an average success rate of 98% on all configs, the average compile error stays in the system for minutes rather than hours, and most of our engineers are now comfortable staying at the latest revision of the code.
In general I would say that a conservative estimate on our overall savings was around 6 man months of time over the last three months of the project compared with the three months prior to deploying CI. This argument has helped us secure resources to expand our build servers and focus more engineer time on additional automated testing.
Our biggest gain is always having a nightly build for QA. Under our old system, each product would find out, at least once a week at 2 AM, that someone had checked in bad code, leaving no nightly build for QA to test with. The remedy was to send release engineering an email; they would diagnose the problem and contact a dev, and sometimes it was noon before QA actually had something to work with. Now, in addition to having a good installer every single night, we actually install it on VMs of all the different supported configurations every night. So now when QA comes in, they can start testing within a few minutes. Compare that with the old way: QA came in, grabbed the installer, fired up all the VMs, installed it, and then started testing. We save QA probably 15 minutes per configuration, per QA person.
There are free CI servers available, and free build tools like NAnt. You can implement it on your dev box to discover the benefits.
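If it helps to demystify what you'd be setting up, the heart of any CI server is just a poll-build-notify loop. Here's a toy sketch in Python; the repository URL, build targets, and mail command are placeholders for whatever your shop uses, not a real configuration:

    import subprocess
    import time

    def sh(cmd):
        """Run a shell command and capture its output (helper for this sketch)."""
        return subprocess.run(cmd, shell=True, capture_output=True, text=True)

    last_built = None
    while True:
        # Placeholder repository URL and commands - substitute your own.
        head = sh("svn info -r HEAD --show-item revision https://example.com/repo").stdout.strip()
        if head and head != last_built:
            sh("svn update")                # sync the build workspace
            result = sh("nant build test")  # hypothetical build + test targets
            if result.returncode != 0:
                sh(f"echo 'r{head} broke the build' | mail -s 'BUILD FAILED' team@example.com")
            last_built = head
        time.sleep(60)                      # poll once a minute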
If you're using source control, and a bug-tracking system, I imagine that consistently being the first to report bugs (within minutes after every check-in) will be pretty compelling. Add to that the decrease in your own bug-rate, and you'll probably have a sale.
The ROI is really the ability to deliver what the customer wants. This is of course very subjective, but when CI is implemented with the involvement of the end customer, customers start appreciating what they are getting, and hence you tend to see fewer issues during User Acceptance.
Would it save cost? Maybe not.
Would the project fail during UAT? Definitely no.
Would the project be closed partway through? High possibility, when the customers find that it would not deliver the expected result.
Would you get real-time and real data about the project? Yes.
So it helps in failing faster, which helps mitigate risks earlier.
We know this is good to have, but I find myself justifying it to my employer. Please pitch in on why a development team needs a build server.
There are multiple reasons to use build servers. In no particular order and off the top of my head:
You simplify the developers' workflow and reduce the chance of mistakes. Your build server can take care of multiple steps, such as checking out the latest code and ensuring the required software is installed. There's no chance of a developer having some stray DLLs on their machine that cause the build to pass or fail seemingly at random.
Your build server can replicate your target environment (operating system, etc.) and there's less of a chance of something working on developers' desktops and breaking in production.
While it's a good practice for developers to test everything they check in, sometimes they just don't. Then it's good to have the build server there to catch test errors and let the team know the product is broken.
Centralized builds provide easy access to code metrics -- which tests passed, which failed, how often, how well is your code covered by your tests, etc. Having a solid understanding of the quality state of the codebase reduces maintenance and testing costs by providing timely feedback that allows errors to be fixed quickly and easily.
Product deployment is simplified -- the developer or QA doesn't have to remember multiple manual steps. It can be easily automated.
The link between developers and QA is simplified. QA personnel can go to a known location to grab the latest, properly versioned builds.
It's easy to set up builds for release branches, providing an extra safety net for products in their release stage, when making code changes must be done with extra care.
To avoid the "but it works on my box" issue.
Have a consistent, known environment where the software is built to avoid dependencies on local dev boxes.
You can use a virtual server to avoid (much) extra cost if you need to.
Immediate knowledge of which unit tests currently pass and which don't; furthermore, you'll also know if a once-passing unit test starts to fail.
This should sum up why it is critical to have a build server:
http://www.codinghorror.com/blog/2006/10/the-build-server-your-projects-heart-monitor.html
It's a continuous quality test dashboard; it shows you statistics about how the quality of your software is doing, and it shows them to you now. (JUnit, Cobertura)
It makes sure developers aren't hamstrung by other developers breaking the build, and encourages developers to write better code. (FindBugs, PMD)
It saves you time and money throughout the year by getting better code from developers the first time - less money on testing and retesting - and by getting more code from the same developers, because they're less likely to trip each other up.
Two main reasons that non technical people can relate to:
It improves the productivity of the dev team because problems are identified earlier.
It makes the state of the project very obvious. I've shown my management the build status dashboard, and now they look at it all the time.
One more thing. Something like Hudson is very simple to set up - you might want to simply run it somewhere in a corner for a while and then show it later.
This is my principal argument:
all official releases must be built in a controlled environment. No exceptions.
This is simply because you never know how the developers create their personal releases.
You also don't need to talk about a build server as in "a blade that costs an arm and a leg". The first build server I set up was a desktop machine that had sat unplugged in a corner. It served us very well for more than 3 years.
Once you have your build machine, you can start adding features (Hudson is great) and implement everything that the other posters mentioned.
Once your build machine becomes indispensable to your organization (and everyone sees its benefits), you will be able to ask for a shiny new blade if you wish :-)
The simplest way to convince your employer to get a build server is to tell them that they will be able to release faster and with better quality.
Faster releases come from the immediate feedback about the quality of the build. If someone breaks the build, he or she can fix it immediately, thus avoiding a delay in the build and release schedule. Without a build server, the team has to spend time figuring out what happened, when, and how to fix it.
Better quality is achieved by the build server running bug-detection tools automatically every time someone checks changes into the version control system. You don't mention the main development language in your organization, but such tools, from advanced commercial products to simple free ones, exist for practically all languages. Lint, FxCop, FindBugs, and PMD come to mind.
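As an illustrative sketch of such a step (the FindBugs invocation and the threshold are assumptions, not any particular server's setup), the build can simply fail when the analyzer reports more issues than an agreed baseline:

    import subprocess
    import sys

    # Hypothetical post-build step: run the analyzer, fail the build if
    # the issue count grows past the agreed baseline (ratchet it down over time).
    BASELINE = 42

    subprocess.run(["findbugs", "-textui", "-output", "findbugs.txt", "build/classes"])
    with open("findbugs.txt") as report:
        issues = sum(1 for _ in report)  # crude: treat one reported line as one issue

    if issues > BASELINE:
        print(f"Static analysis found {issues} issues (baseline {BASELINE}); failing the build")
        sys.exit(1)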
You may also check this presentation on benefits of continuous integration for a more extensive discussion.
I have a few software developers working on my projects, and I would like to give them a way to register the time they spend on real development.
There is good will to register development hours - no force - but we try to avoid techniques like registering in Excel sheets, because that is so uncomfortable.
I can track SVN commits, but this is unreliable. Developers also help support different projects during the day, so assuming they work on one project all day isn't true.
I've seen utilities that pop up a message every hour to confirm the project you're working on, but this is annoying.
Some kind of active-window-title analyzer might help (you can get the solution name from there in the case of Visual Studio), but I have no experience with such an idea.
If you have any experience with programmers/designers work hours registration, please share with me.
Thanks
It's a good question, and the very best way to measure hours spent on a development project is not to measure hours spent at all.
You say there is good will to register hours, but I have my doubts. Realistically, from a developer's point of view, excessive time management is a distraction for most (perhaps ALL) developers on the planet.
I can understand why time is measured so excessively on oDesk. There is a good reason for this: project time is paid up front by the client to oDesk, and the developer needs to prove hours worked to oDesk. Payment is also guaranteed, and it's unlikely oDesk providers and developers will ever meet in real life, so there is no trust.
Since it's unlikely you're paying your developers up front, and it's more likely you have better trust established, you should perhaps switch your attention from a management strategy that's kludgy and annoying to something way more useful.
Yes, ladies and gentlemen, I am talking about Scrum. Throw any notion of keeping hour-by-hour tabs on your developers out the window (they'll love you for it), and instead introduce Scrum management into the scenario.
Create some sprints (milestones) for your product development, and within them a list of iterations (batched deliverables); try to keep your iterations down to a weekly period.
Create a product backlog, and make sure you're aware of exactly who is working on what. Find someone on the team to act as a scrum master on your behalf, or take on this responsibility yourself. Make sure you have daily feedback meetings, kept short and focused, to identify any risks that might get in the way of deliverables. Let the developers more or less drive the timing process, and get realistic estimates for tasks.
Read a book or two on Scrum, and get others and team members involved in the learning curve. Tweak the base Scrum methodology to best suit your particular style of management, and I assure you, you will have a very happy team.
Measure your time in man-days, and try to avoid getting on a developer's back for hour-by-hour progress...
There are various time-tracking software tools, as you have probably already seen from Google searches. But in all honesty, you are asking for the Holy Grail of time tracking. For example, do you count a developer staring at her code and thinking as development time? She might stare at the screen for an hour and only type for 10 minutes, in which case it looks like she barely worked when really she worked for an hour and 10 minutes. I don't mean to say what you're asking for isn't a valid thing to want; it just seems to be one of those problems that has no perfect solution.
Good luck.
I think you're asking the wrong question and are headed down a slippery slope. So much goes into development that has nothing to do with actually coding.
If you're dead set on tracking something, I think a better solution is to track the time spent on activities NOT related to development. Of course, there's some grey area there too. For example, a meeting to discuss user requirements should probably count toward development even though no coding will get done.
You need something like this dashboard to measure time on task. The only way to know real software-developer time is to have them track it, so that when they switch tasks they stop the clock, so to speak. I think the hardest thing would be getting the developers to use it as a measure of how long they work on a certain project or even a code module. If you can then use those metrics to reduce distractions and other time sucks, you might at least get a decent swag at how much time they spend coding versus email versus talking to other developers.
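If you do go the track-it-yourself route, the mechanism can be tiny. Here's a hedged sketch of such a start/stop clock in Python (a made-up example, not the dashboard linked above):

    import time
    from collections import defaultdict

    class TaskClock:
        """Minimal start/stop timer: switch it when you switch tasks."""
        def __init__(self):
            self.totals = defaultdict(float)  # seconds accumulated per task
            self.current = None
            self.started_at = None

        def switch_to(self, task):
            now = time.time()
            if self.current is not None:
                self.totals[self.current] += now - self.started_at
            self.current, self.started_at = task, now

    clock = TaskClock()
    clock.switch_to("project-a: bugfix #123")
    # ... work happens ...
    clock.switch_to("email")  # stops the clock on the previous task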
If you're trying to measure what the developer has as their active window, you have to assume goodwill, because any decent developer can sneak around that if you're trying to turn the screws on them. I spend about a third of my "development time" in Firefox looking at references, for example.
Maybe ask the developers just to keep a log, so you know where their time is going? While that's not ideal, you're never going to do much better than that.
If you are trying to measure the time spent on distractions, disturbances, and other tasks, wouldn't it be in your developers' interest to give you this information willingly?
You said somewhere that you are implementing scrum.
If you really have to, either take it up at the daily scrum, making it a part of the ritual, or add a very short daily meeting at the end of the day. Let the developers guesstimate how much of the day was spent on distractions, disturbances, and other tasks. To me, that feels like it will be as close to "correct" as any other way of measuring, considering the difficulties involved.
So, instead of having the developers note down the time, make it the scrum master's job to sum it up, and make this as painless as possible for the devs. Make sure the developers gain something tangible from doing it as well; otherwise it is going to end up on the impediment list awfully quickly.
As Dean J implied, you have to trust the developers anyway.
It depends on your IDE. If you are using Eclipse, then I recommend the Mylyn plugin. You can measure the time spent on each task, and tasks can be fetched from all the well-known task repositories, e.g. Tuleap. Details here.
The user only needs to put the task into Active mode, and deactivate it when the task is finished, to stop the timer. I think Mylyn will support such a process: if the status of a task changes, that would trigger the active mode (and if it's closed, deactivate the task).
Sometimes development involves using a browser or a terminal. Eclipse can be used as a browser and as a terminal as well, so the developer does not need to leave Eclipse, and almost every activity related to a task can be measured.
Try DotProject or XPlanner
An active-window analyzer won't give you reliable results, because your developers will swap between programs (Outlook, File Explorer, version control, internet browser, etc.). Your proposed analyzer won't log that time, although it will very likely be part of the development time the developer puts into the project (software development is not just coding in VS all the time).
Trying to measure a developer's work hours is the wrong notion. A proper question to ask is: what is a programmer's effectiveness? That can't be measured by coding time, time spent sitting in front of a computer, or the like.
As Joel Spolsky put it well in a blog on software craftsmanship, software development "...is not a manufacturing process."
A related but somewhat different discussion appears in this SO article on an invasive programmer productivity measurement tool.
I definitely would not recommend using any time-measuring software that developers are forced to use. It is a massive distraction to developers' concentration.
Instead, the following simple techniques can be used:
Spreadsheet: For smaller projects or team of developers
There's probably nothing easier than creating and sharing a spreadsheet online: add project tasks to it, assign tasks to developers, let the developers estimate hours for their tasks, let them update their task statuses (a very rough value between 0% and 100%) as they want, and let them record the time (hours) it really took to complete each task.
So in the spreadsheet you may have columns:
task name, assigned to, estimated hours, real hours, done (%)
A Google Drive spreadsheet may be the answer.
This is a very simple and fast method that distracts developers minimally.
Scrum: For medium to large projects or team of developers
Tasks and Scrum information are recorded and kept on a board in the office, and/or a dedicated Scrum application can be used. A good web Scrum application is Pivotal Tracker, which I would recommend for any size of project or team.
More about Scrum:
http://en.wikipedia.org/wiki/Scrum_(development)
In both cases, the product owners (clients or those who deal with clients), project managers, and all developers can, better and faster:
communicate
estimate
see progress of the team and all individuals involved
We have 3 branches {Dev,Test,Release} and will have continuous integration set up for each branch. We want to be able to assign build qualities to each branch i.e. Dev - Ready for test...
Has anyone any experience with this that can offer any advice/best practice approach?
We are using TFS 2008 and we are aware that it has build qualities built in. What we are looking for is when to apply a quality and what kinds of qualities people use.
Thanks :)
Your goal here is to get the highest quality possible in each branch, balanced against the burden of verifying that level of quality.
Allowing quality to drop in any branch is always harmful. Don't think you can let the Dev branch go to hell and then fix it up before merging. It doesn't work well, for two reasons:
Recovering is harder than you expect. When a branch is badly broken, you don't know how broken it really is, because each issue hides others. It's also hard to make progress on any one issue, because you'll run into other problems along the way.
Letting quality drop saves you nothing. People sometimes say "quality, cost, schedule - pick any 2" or something like that. The false assumption here is that you "save" by allowing quality to slip. The problem is that as soon as quality drops, so does "velocity" - the speed at which you get work done. The good news is that keeping quality high doesn't really cost extra, and everyone enjoys working with a high-quality code base.
The compromise you have to make is on how much time you spend verifying quality.
If you do Test Driven Development well, you will end up with a comprehensive set of extremely fast, reliable unit tests. Because of those qualities, you can reasonably require developers to run them before checking in, and run them regularly in each branch, and require that they pass 100% at all times. You can also keep refactoring as you go, which lets you keep velocity high over the life of the project.
Similarly, if you write automated integration / customer tests well, so they run quickly and reliably, then you can require that they be run often, as well, and always pass.
On the other hand, if your automated tests are flaky, if they run slowly, or if you regularly operate with "known failures", then you will have to back off on how often people must run them, and you'll spend a lot of time working through these issues. It sucks. Don't go there.
Worst case, most of your tests are not automated. You can't run them often, because people are really slow at these things. Your non-release branch quality will suffer, as will the merging speed and development velocity.
Assessing the quality of a build in a deterministic and reproducible way is definitely challenging. I suggest the following:
If you are set up to do automated regression testing then all those tests should pass.
Developers should integration test each of their changes using an official Dev build newly installed on an official and clean test rig and give their personal stamp of approval.
When these two items are satisfied for a particular Dev build you can be reasonably certain that promoting this build to Test will not be wasting the time of your QA team.
I guess everybody agrees that having continuous builds and continuous integration is beneficial for the quality of the software product. Defects are found early, so they can be fixed ASAP. For continuous builds, which take several minutes, it is usually easy to find who caused the defect. For nightly integration tests, which take a long time to run, this can be a challenge. Here are the specifics of the situation for which I'm looking for an optimal solution:
Running the integration tests takes more than an hour, so they are run overnight. Multiple check-ins happen every day (a team of about 15 developers), so it is sometimes difficult to find the "culprit" (if any).
The integration testing environment depends on other environments (web services and databases), which may fail from time to time, causing the integration tests to fail.
So how do you organize the team so that these failures are fixed early? In my opinion, someone should be appointed to DIAGNOSE the defect(s), as the first task in the morning. If he needs the expertise of others, they should be readily available. Once the source of the failure (component, database, web service) is determined, the owner should start fixing it (or another team should be notified).
How do you appoint the one who diagnoses the defects? Ideally, someone would volunteer (ha ha); I'm afraid that won't happen very often. I've heard another option: whoever comes into the office first checks the results of the nightly builds. This is OK if the whole team agrees, but it rewards those who come late. I suppose this role should rotate within the team. The excuse "I don't know much about builds" should not be accepted; diagnosing the source of a failure should be rather straightforward, and if it is not, adding more diagnostic logging to the code should improve visibility into integration test failures.
Any experience in this area or suggestions for improvements of the above approach?
A famous policy about broken nightly builds, attributed to Microsoft, is that the guy whose commit broke the build becomes responsible for maintaining nightly builds until someone else breaks it.
That makes sense, since
everyone makes mistakes, so the necessary rotation will occur (with a least-recently-used rule to break ties in ambiguous cases)
it encourages people to write better code
What I generally do (I've done it for a team of 8 to 10 people) is to have one person check the build as the first thing he does in the morning -- some would say he is responsible for QA, I suppose.
If there is a problem, he's responsible for finding out what happened and how -- of course, he can ask for help from the other members of the team if needed.
This means there's at least one member of the team who has to have a great knowledge of the whole application -- but that's not a bad thing anyway: it'll help diagnose problems the day the application is used in production and suffers a failure.
And instead of having one person do that, I like it when there are two: one for the first week, the other for the second week, for instance; this way, there's a greater chance of always having someone who can diagnose problems, even if one of them is on holiday.
As a side note: the more useful things you log during the build, the easier it is to find out what went wrong -- and why.
Why not let everyone on the team check the build every morning?
Well, not everyone wants to, first of all -- and it will be done better if the person doing it likes what he does.
And you don't want 10 people spending half an hour every day on that ^^
In your case, I'd suggest whoever is in charge of CM. If that is the manager or technical lead, who has too many responsibilities, why not give it to a junior developer? I wish someone had forced me early in my career to get to know source control more thoroughly. Not only that, but looking at other people's code to track down the source of an error is a real skill-building and knowledge-learning exercise. They say you gain the most from looking at other people's code, and I'm a firm believer in this.
Pair the experienced with the inexperienced
You may want to consider having pairs of developers diagnose broken builds. I've had good luck with that, especially if you pair team members who have little familiarity with the build system with team members who know it well. This may reduce the chance of team members saying "I don't know much about builds" as a way to get out of the duty, and it will increase your bus factor and collective ownership.
Give the team a choice of your assigned solution or one of their own making
You could put the issue to your team and ask them to offer a solution. Tell them that if they don't come up with a workable solution, you will make a weekly schedule, assigning one pair per day and making sure that everyone has the opportunity to participate.
Practice continuous integration so you don't need infrequent mega-builds (you can distribute builds across machines if it's too slow for one machine).
Use a build status monitor so that whoever checked something in can be made responsible for build failures.
Have an afternoon check-in deadline
Either:
Nobody checks in after 5pm
or
Nobody checks in after 5pm unless they're prepared to stay at work until their build comes up green - even if that means working on it, committing a fix, and waiting for a rebuild.
It's much easier to enforce and obey the first form, so that's what I'd follow.
Members of a former team of mine actually got phoned up and told to return to work to fix the build... and they did.
I'd be tempted to suggest splitting things up in either of a couple of ways:
Time split - assuming the tests could run twice a night, why not run them against the code at two different points in time, i.e., all the check-ins up to X p.m. and then the remainder? That would help narrow down where the problem is (see the sketch after this list).
Team split - can the code be split into smaller pieces so that the tests could be run on different machines, to help narrow down which group should dig into things?
This assumes you can run the tests multiple times and divide things up in such a way; it's a rough idea.
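To make the time-split idea concrete, here's a hedged sketch of narrowing down the culprit by re-running only the failing test across the night's revision range (the commands and revision numbers are illustrative, not a real setup):

    import subprocess

    def test_passes_at(rev):
        """Check out a revision and run just the failing integration test.
        Both commands are placeholders for your own VCS and test runner."""
        subprocess.run(f"svn update -r {rev}", shell=True, check=True)
        return subprocess.run("nant run-failing-test", shell=True).returncode == 0

    def find_culprit(good_rev, bad_rev):
        """Binary-search the revision range covered by the failed nightly run."""
        while bad_rev - good_rev > 1:
            mid = (good_rev + bad_rev) // 2
            if test_passes_at(mid):
                good_rev = mid
            else:
                bad_rev = mid
        return bad_rev  # the first failing revision

    # e.g. find_culprit(4510, 4537) after the nightly covering r4511-r4537 fails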
We have a stand-up meeting every morning, before starting work. One of the things on the checklist is the status of the nightly build. Our build system spits out an email after it's run, reporting the status, so this is easy to find out - as it happens, it goes to one guy, but really it should go to everyone, or be posted onto the project wiki.
If the build is broken, then fixing it becomes a top-priority task, to be handled like any other task, which means that we'll decide at the standup who is going to work on it, and then they go and do so. We do pair programming, and will usually treat this as a pair task. If we're short-staffed, we might assign one person to investigate the breakage, and then pair someone with him to fix it a bit later.
We don't have a formal mechanism for assigning the task: we're a small team (six people, usually), and have collective code ownership, so we just work it out between ourselves. If we think one particular pair's checkin broke the build, it would usually be them who fix it. If not, it could be anyone; it's usually decided by seeing who isn't currently in the middle of some other task.
Skunk Works Project: A project carried out by one part of a company without the knowledge of the remainder of the company.
Looking for stories about any skunk works projects you've worked on or initiated:
Was it successful?
Were you found out?
Were you punished or rewarded?
How did you fund it?
How did you staff it?
How long did it take to finish, compared to above-ground projects?
What was the cost, compared to above-ground projects?
Was it formally adopted?
Excellent question. Very important question.
Geoffrey Moore (Inside the Tornado, Crossing the Chasm, etc.) has written that, as he lectured around the world, there was one question he would ask every client (including the likes of GE, Motorola, etc.).
The question was:
"Can you give me even one example of a truly ground-breaking, paradigm shifting innovation that has ever come out of your company's research or product development process?"
At least as of when I read the quote, not one example had been identified. And in most cases, such products or services had been conceived, designed, and largely developed by small groups of people who at best were ignored, but were often actively opposed, by R&D.
Not sure if this qualifies as a "skunk works" project, but here's a great story from This American Life. It's Act Two of this March 2005 episode.
Amy O'Leary tells the story of a software writer at Apple Computer whose job contract ends, but he refuses to go away. He continues to show up at work every day, sneaking in the front door, hiding out in empty offices, and putting in long hours on a project the company cancelled. There were no meetings, no office politics, no managers interfering with his work. Soon, he had written a perfect piece of software. His final problem is figuring out how to secretly install it in Apple's new computers without anyone noticing. (12 minutes)
Great listening for anyone, but especially programmers in this case.
I have actually done a number of these "secret" project situations, where they were not fully supported when started and were kept very secret. I'll discuss one of them along the lines of your questions.
Was it successful?
Yes, the system developed was put into place 3 years ago, and has been functioning ever since.
Were you found out?
Yes, it was discovered, and it was part of the overall plan.
Were you punished or rewarded?
With a working prototype we were rewarded, given the extra resources needed, and eventually the system was put into place for the entire company to use.
How did you fund it?
It was a development activity that was simply completed in down-time and personal time by various people.
How did you staff it?
See the above.
How long did it take to finish, compared to above-ground projects?
We did the entire thing in about 4 months; with dedicated resources, a single person could have done it in about 2 months, or a team in about 3-4 weeks.
What was the cost compared to above-ground projects?
No cost - down-time that was already "wasted" was put to effective use. All the existing infrastructure for the final incarnation was already there.
Was it formally adopted?
Yes, it is a solid part of the business plan now, and has been for over 3 years.
We are currently in this situation, although, admittedly, the project will not have high-visibility aspects - even though everyone will eventually be using it.
As part of a preparation to rebuild most of our enterprise applications, we have started developing an application framework that will be the basis for all of the replacement applications. We already have "bench projects" and "proof of concept" time that everyone is aware that we use to evaluate concepts. How it is different this time, though, is that we are actively developing a full project.
Was it successful? - We have not rolled out the full framework yet, but since it is modular, have been rolling out pieces in the legacy applications. Most of these are focused on stability and reporting/logging concerns. So far, they have exceeded expectations, allowing us to react to issues more rapidly, as well as eliminating some previously recurring ones.
Were you found out? - Well, this project has become one of the worst kept secrets I have ever seen. While there are quite a few people who have heard the name of the project thrown around a bit, I don't think anyone outside of a few of my developers and the testing team really know what it is about.
Were you punished or rewarded? - We haven't considered either side of this, yet. Unless the framework would cause negative effects, I doubt we would be punished for it. However, even if it is a success, the reward will be that no one notices anything other than improved applications.
How did you fund it? - Like mentioned before, bench time between other projects and inclusion in "proof of concept" work. I have also been putting some of my own personal time into it on my commute, since it will lay the groundwork for how all of my developers interact with the applications in the future.
How did you staff it? - I started with a series of small proofs of concept within the legacy codebase as part of "maintaining" the applications. Going in and fixing a defect often involved analytical steps on what could be done to prevent problems or improve the experience in the future. These were eventually extracted and refactored into their own assemblies, which became the beginning of the framework. We are now placing "covert" projects into our iterations that help flesh out these ideas through my developers, and we are now extracting and refactoring their efforts based upon the success of the implementation.
How long did it take to finish, compared to above-ground projects? - Yet to be determined. Since this is not an official project, so far it has really cost nothing. Bench time and "proof of concept" work is standard inclusion. The fact that we are essentially creating something from this time instead of throwing it away is gravy.
What was the cost, compared to above-ground projects? - Once again, yet to be determined. I imagine that the up-front cost will be relatively small compared to larger projects. Considering that this is a framework to contain commonly used extensions and improve the ability and quality of the developer to work efficiently, it will probably pay for itself before it is finished due to time-saving, improved practices, and reduction in defects.
Was it formally adopted? - The developers have embraced the concept. My immediate management is chomping at the bit. My management peers are excited, if not a little confused on what it will do. The measurement will be the success of the applications that are built off of the framework - which is some ways away still.
I built a tool to validate schema changes to the target DB at work. Prior to my tool, we did it all by hand with fugly scripts that DBAs at client sites had to run. My tool started tracking the structure of the database to know whether certain things would work out. I got frustrated with having to hand-check all this stuff or suffer from the errors inevitable in doing things by hand, so I built my validator, and here is its story...
Was it successful?
Yes
Were you found out?
Yes. Part of the nature of a skunk works project is that it has to surface eventually.
Were you punished or rewarded?
Punished initially - why wasn't I working on mainstream activities? But rewarded once the benefit was made evident and product errors were reduced. Then it was heralded - everyone loves a winner.
How did you fund it?
For the love of coding it up and making my life easier - so no direct funds were needed. Unless it's part of management's plan to have a secret project, I cannot see how this could be otherwise.
How did you staff it?
I coded alone as a lone developer on a grassy knoll with my laptop.
How long did it take to finish, compared to above-ground projects?
Not comparable. My skunkworks effort was maybe a year of tinkering. If we had set out to do it directly, I can't imagine it would have taken less than 2 months, but I don't know, since that's not how it morphed. The downtime to think and plan may have made it faster in the end, compared with direct upfront planning.
What was the cost, compared to above-ground projects?
Undetermined. As I mentioned, given that I had downtime to think and plan, it was able to evolve in the direction I wanted without schedule or results pressure. In a shorter or more resource-involved project we would probably have made some mistakes in rushing to get to some M1, M2, etc. Besides, if it hadn't worked out, it would have been as if it had never happened, as I could have folded up the tents and gone quietly into the night.
Was it formally adopted?
My project is a key part of the product build at my work, so I would say it's entrenched.
Hmm... I did one of these today actually.
We've got no real backup system in place. At present I get the highly enjoyable task of backing up 100GB of SVN repositories using svn hotcopy and .tar.gz files, while trying to juggle them across two or three NFS shares with limited disk space to get them to the server with the backup disk. That's the best case - i.e., when I can be bothered to babysit the process for 2 hours.
Since that's bound to end in catastrophe sooner or later I did a git svn clone on the largest one straight onto the backup server, then cloned that to my own machine and kicked out the svn working copy I was using. I've gained about 1GB of free space on my machine, given the most important backups some redundancy, and reduced a 15 minute svn st to a 30 second git status. And will I get complained at for it? Probably...
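For anyone wanting to replicate this, the approach boils down to a handful of commands; here's a hedged sketch wrapped in Python (the repository URL and paths are made up):

    import subprocess

    REPO_URL = "https://svn.example.com/big-repo"  # placeholder URL
    BACKUP = "/backups/big-repo"                   # on the backup server

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # One-time: mirror the whole SVN history into a git repository.
    sh(f"git svn clone {REPO_URL} {BACKUP}")

    # Thereafter: a cheap incremental refresh instead of the
    # 2-hour hotcopy-and-tar dance.
    sh(f"cd {BACKUP} && git svn fetch")

    # Extra redundancy: clone the backup onto a workstation too.
    sh(f"git clone {BACKUP} ~/big-repo")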
Generally the answers here have been success stories, so I thought I'd share my recent experience sitting just outside such a project that did not go so well.
How did you fund it?
How did you staff it?
The project started when my manager identified a potential employee - let's call him Fred - who had a pet project in our field. We don't pay well, and they agreed that Fred would be hired and would work almost full time on the project, which they would eventually introduce to the business.
So Fred started work on the project, known to Fred's team but not to management or other parts of the business. Fred is a developer, and the work was more or less pure development, plus contributions to an underlying open-source project.
Was it successful?
Not really. Fred was working on it alone, and I think he spent 12-18 months on it. Progress reports to the team consisted of describing whatever bug he was fixing that week. Occasional attempts were made to interest one or two higher-ups in the organization, but they never really went anywhere. Fred was supposed to put together a plan to finish and roll out the project so it could be introduced to the organization, but there always seemed to be some reason it never got done.
Were you found out?
Word slowly filtered out as Fred and the manager tried to interest more people in what they were doing.
Eventually we were restructured, and our new director wanted to know what everyone was working on, so the project was revealed to him. However, it was apparently not explained very well, since the new director wound up asking me (and others in our team, I am sure) what exactly Fred's project was.
Were you punished or rewarded?
Eventually the new director froze all funding for the project and Fred was reassigned to work on other projects. That's the current status as far as I know.
How long did it take to finish, compared to above-ground projects?
Was it formally adopted?
It was not finished and it was not adopted.
What was the cost, compared to above-ground projects?
The ostensible cost was Fred's time.
However, there were other costs.
First, Fred and his project became a bit of a joke in our team, and later in the teams we work with. What was he doing? Why was he doing it? Why was there no progress? Fred's reputation suffered. "Fred's project" became an in-joke for a project that was going nowhere.
Second, the eventual revelation of such a long-running but hidden project reflected poorly on our manager, and by extension on our whole team.
Third, resentment grew. Why was this guy working on his pet project when there was so much real work to be done? We are a small but busy team and we could have used a developer on any number of other projects.
In the end, I think this project has had consequences for our team's standing and dynamic. I occasionally talk it over with team members, when we're away from the office. Initially (and at the time) we were very critical of Fred, who can be an irritating guy, and who does not take criticism well, and who promised something he couldn't deliver. More recently, we've been critical of our boss. This was not a good way to run a project, and it was obvious from very early on that Fred did not have the skillset to do this work on his own and he would not seek or take advice. It was unfair to Fred that he was put in that position and left in it for so long. Lately I have wondered if I should have raised my concerns more forcefully. Though we did push Fred and our manager on what the project was and where it was going, we did not take it any further than our team. Having said that, I cannot imagine a good outcome even if we had.
Finally, I'd like to say that Fred is a smart guy and the project was not a bad one. It could have been successful (some parts have since come out in competing projects -- inferior competitors that actually delivered).
If this project had been done above board, and Fred had been working with a decent project manager and had a good communicator on the team, it could well have found a champion and delivered something great. Either that or it could have been killed a lot sooner.
I did one of these. It's actually how I ended up programming.
I was responsible for maintaining a legacy, er... "database". I won't go into the gory details, but it was the usual evil application. The company pretty much ran on it, and it would sometimes go down for days. At the time, the IS director (a friend) was actively looking for replacements, talking to large consulting organizations, etc., but management was committed to, and emotionally invested in, the existing system. I volunteered (to the IS director) to try to rewrite it (well, more like he asked if anyone was interested in trying to deal with this mess, and I volunteered because I was bored). We had no real programmers on staff, and I'd only written a few small ad-hoc things. I had no idea how little I knew.
I finished the thing in about 8 months, or maybe a year (this was a while ago; I don't remember exactly).
Was it successful?
Yes, it worked as advertised.
Were you found out?
It initially started as a sort of super-secret, cloak-and-dagger thing. Kind of silly in retrospect, but it made it more fun. About halfway through, it just started to become more obvious that this was what I was doing, and as it turned out, the idea was supported. Writing this thing eventually became my job.
Were you punished or rewarded?
Rewarded
How did you fund it? / How did you staff it?
The success of it was pretty much due to the support of my boss, who made sure I had the time and resources I needed to do it.
Was it formally adopted?
Yes, we eventually ran the company on it.