Skunk Works Projects [closed] - project-management

Skunk Works Project: A project carried out by one part of a company without the knowledge of the remainder of the company.
Looking for stories about any skunk works projects you've worked on or initiated:
Was it successful?
Were you found out?
Were you punished or rewarded?
How did you fund it?
How did you staff it?
How long did it take to finish, compared to above-ground projects?
What was the cost, compared to above-ground projects?
Was it formally adopted?

Excellent question. Very important question.
Geoffrey Moore (Inside the Tornado, Crossing the Chasm, etc.) has written that, as he lectured around the world, he had one question he would ask every client (including those like GE, Motorola, etc.)
The question was:
"Can you give me even one example of a truly ground-breaking, paradigm shifting innovation that has ever come out of your company's research or product development process?"
At least at the point where I read the quote, not one example had been identified. And in most cases, such products or services had been conceived, designed, and largely developed by small groups of people who at best were ignored, but were often actively opposed by R&D.

Not sure if this qualifies as a "skunk works" project, but here's a great story from This American Life. It's Act Two of this March 2005 episode.
Amy O'Leary tells the story of a software writer at Apple Computer whose job contract ends, but he refuses to go away. He continues to show up at work every day, sneaking in the front door, hiding out in empty offices, and putting in long hours on a project the company cancelled. There were no meetings, no office politics, no managers interfering with his work. Soon, he had written a perfect piece of software. His final problem is figuring out how to secretly install it in Apple's new computers without anyone noticing. (12 minutes)
Great listening for anyone, but especially programmers in this case.

I have actually done a number of these "secret" project type situations, where they were not fully supported when started and were kept very quiet. I'll discuss one of these along the lines of your questions.
Was it successful?
Yes, the system developed was put into place 3 years ago, and has been functioning ever since.
Were you found out?
Yes, it was discovered, and it was part of the overall plan.
Were you punished or rewarded?
With a working prototype we were rewarded, given the extra resources needed, and eventually the system was put into place for the entire company to use.
How did you fund it?
It was a development activity that was simply completed in down-time and personal time by various people.
How did you staff it?
See the above.
How long did it take to finish, compared to above-ground projects?
We did the entire thing in about 4 months, with dedicated resources a single person could have done it in about 2 months, or a team in about 3-4 weeks.
What was the cost compared to above-ground projects?
No cost; down-time that was already being "wasted" was put to effective use. All the existing infrastructure for the final incarnation was already there.
Was it formally adopted?
Yes, it is a solid part of the business plan now, and has been for over 3 years.

We are currently in this situation, although, admittedly, the project will not have high-visibility aspects - even though everyone will eventually be using it.
As part of a preparation to rebuild most of our enterprise applications, we have started developing an application framework that will be the basis for all of the replacement applications. We already have "bench projects" and "proof of concept" time that everyone is aware we use to evaluate concepts. What is different this time, though, is that we are actively developing a full project.
Was it successful? - We have not rolled out the full framework yet, but since it is modular, we have been rolling out pieces in the legacy applications. Most of these are focused on stability and reporting/logging concerns. So far, they have exceeded expectations, allowing us to react to issues more rapidly, as well as eliminating some previously recurring ones.
Were you found out? - Well, this project has become one of the worst kept secrets I have ever seen. While quite a few people have heard the name of the project thrown around a bit, I don't think anyone outside of a few of my developers and the testing team really knows what it is about.
Were you punished or rewarded? - We haven't considered either side of this yet. Unless the framework causes negative effects, I doubt we would be punished for it. However, even if it is a success, the reward will be that no one notices anything other than improved applications.
How did you fund it? - As mentioned before, bench time between other projects and inclusion in "proof of concept" work. I have also been putting some of my own personal time into it on my commute, since it will lay the groundwork for how all of my developers interact with the applications in the future.
How did you staff it? - I started with a series of small proofs of concept within the legacy codebase as part of "maintaining" the applications. Going in and fixing a defect often involved analyzing what could be done to prevent the problem from recurring or to improve the experience in the future. These were eventually extracted and refactored into their own assemblies, which became the beginning of the framework. We are now placing "covert" projects into our iterations that help flesh out these ideas through my developers, and we then extract and refactor their efforts based upon the success of the implementation.
How long did it take to finish, compared to above-ground projects? - Yet to be determined. Since this is not an official project, it has really cost nothing so far. Bench time and "proof of concept" work are standard inclusions; the fact that we are essentially creating something from this time instead of throwing it away is gravy.
What was the cost, compared to above-ground projects? - Once again, yet to be determined. I imagine that the up-front cost will be relatively small compared to larger projects. Considering that this is a framework to contain commonly used extensions and improve the ability and quality of the developer to work efficiently, it will probably pay for itself before it is finished due to time-saving, improved practices, and reduction in defects.
Was it formally adopted? - The developers have embraced the concept. My immediate management is champing at the bit. My management peers are excited, if a little confused about what it will do. The real measure will be the success of the applications that are built off of the framework - which is still some way off.

I built a tool to validate schema changes to the target DB at work. Prior to my tool we did it all by hand with fugly scripts that DBAs at client sites had to run. My tool started tracking the structure of the database to know whether certain things would work out. I got frustrated with having to hand-check all this stuff or suffer from the errors inevitable in doing things by hand, so I built my validator, and here is its story...
Was it successful?
Yes
Were you found out?
Yes. Part of the nature of a skunk works project is that it has to surface eventually.
Were you punished or rewarded?
Punished initially - why wasn't I working on mainstream activities? But rewarded once the benefit was made evident and product errors were reduced. Then it was heralded - everyone loves a winner.
How did you fund it?
For the love of coding it up and making my life easier - so no direct funds were needed. Unless it was part of management's plan to have a secret project, I cannot see how this could be otherwise.
How did you staff it?
I coded alone as a lone developer on a grassy knoll with my laptop.
How long did it take to finish, compared to above-ground projects?
Not comparable. My skunk works effort was maybe a year of tinkering. If we had set out to do it directly, I can't imagine it would have taken less than 2 months, but I do not know, since that's not how it morphed. Downtime to think and plan may have made it faster in the end compared to direct planning upfront.
What was the cost, compared to above-ground projects?
Undetermined - as I mentioned, given that I had down time to think and plan, it was able to evolve in the direction I wanted without schedule/result pressures. In a shorter or more resource-intensive project we probably would have made some mistakes in rushing to get to some M1, M2, etc. Besides, if it hadn't worked out, it would have been as if it had never happened, as I could have folded up the tents and gone quietly into the night.
Was it formally adopted?
My project is a key part of the product build at my work, so I would say it's entrenched.

Hmm... I did one of these today actually.
We've got no real backup system in place. At present I get the highly enjoyable task of backing up 100GB of SVN repositories using svn hotcopy and .tar.gz files, while trying to juggle them across two or three NFS shares with limited disk space to get to the server with the backup disk. That's in the best case - i.e. when I can be bothered to babysit the process for 2 hours.
Since that's bound to end in catastrophe sooner or later I did a git svn clone on the largest one straight onto the backup server, then cloned that to my own machine and kicked out the svn working copy I was using. I've gained about 1GB of free space on my machine, given the most important backups some redundancy, and reduced a 15 minute svn st to a 30 second git status. And will I get complained at for it? Probably...
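In case anyone wants to copy the trick, here is a rough sketch of the mirroring step (assuming git with the git-svn extension is installed; the repository URL and backup path are made-up placeholders, not the ones from this story):

    #!/usr/bin/env python3
    """Rough sketch: mirror an SVN repository with git-svn for backup purposes.

    Assumes git with the git-svn extension is installed. The URL and path are
    hypothetical placeholders, not the ones from the story above.
    """
    import subprocess
    from pathlib import Path

    SVN_URL = "https://svn.example.com/repos/biggest-project"  # placeholder
    MIRROR = Path("/backup/biggest-project")                   # placeholder

    def run(*args, cwd=None):
        # Echo the command, then fail loudly if it does not succeed.
        print("+", " ".join(args))
        subprocess.run(args, cwd=cwd, check=True)

    if not MIRROR.exists():
        # First run: pull the whole SVN history into a local git repository.
        run("git", "svn", "clone", SVN_URL, str(MIRROR))
    else:
        # Later runs: just fetch the new SVN revisions into the mirror.
        run("git", "svn", "fetch", cwd=MIRROR)
        run("git", "svn", "rebase", cwd=MIRROR)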

Generally the answers here have been success stories, so I thought I'd share my recent experience sitting just outside such a project that did not go so well.
How did you fund it?
How did you staff it?
The project started when my manager identified a potential employee, let's call him Fred, who had a pet project in our field. We don't pay well, and they agreed that Fred would be hired and would work almost full time on the project, which they would eventually introduce to the business.
So Fred started work on the project, known only to Fred's team but not to management or other parts of the business. Fred is a developer, and the work was more-or-less pure development, plus contributions to an underlying open-source project.
Was it successful?
Not really. Fred was working on it alone, and I think would have spent 12-18 months on it. Progress reports to the team consisted of describing whatever bug he was fixing that week. Occasional attempts were made to interest one or two higher-ups in the organization, but they never really went anywhere. Fred was supposed to put together a plan to finish and roll out the project so it could be introduced to the organization, but there always seemed to be some reason it was never done.
Were you found out?
Word slowly filtered out as Fred and the manager tried to interest more people in what they were doing.
Eventually we got restructured, and our new director wanted to know what everyone was working on, so the project was revealed to him. However, it was apparently not explained very well, since the new director wound up asking me (and others in our team, I am sure) what exactly Fred's project was.
Were you punished or rewarded?
Eventually the new director froze all funding for the project and Fred was reassigned to work on other projects. That's the current status as far as I know.
How long did it take to finish, compared to above-ground projects?
Was it formally adopted?
It was not finished and it was not adopted.
What was the cost, compared to above-ground projects?
The ostensible cost was Fred's time.
However, there were other costs.
First, Fred and his project became a bit of a joke in our team, and later in the teams we work with. What was he doing? Why was he doing it? Why was there no progress? Fred's reputation suffered. "Fred's project" became an in-joke for a project that was going nowhere.
Second, the eventual revelation of such a long-running but hidden project reflected poorly on our manager, and by extension on our whole team.
Third, resentment grew. Why was this guy working on his pet project when there was so much real work to be done? We are a small but busy team and we could have used a developer on any number of other projects.
In the end, I think this project has had consequences for our team's standing and dynamic. I occasionally talk it over with team members, when we're away from the office. Initially (and at the time) we were very critical of Fred, who can be an irritating guy, and who does not take criticism well, and who promised something he couldn't deliver. More recently, we've been critical of our boss. This was not a good way to run a project, and it was obvious from very early on that Fred did not have the skillset to do this work on his own and he would not seek or take advice. It was unfair to Fred that he was put in that position and left in it for so long. Lately I have wondered if I should have raised my concerns more forcefully. Though we did push Fred and our manager on what the project was and where it was going, we did not take it any further than our team. Having said that, I cannot imagine a good outcome even if we had.
Finally, I'd like to say that Fred is a smart guy and the project was not a bad one. It could have been successful (some parts have since come out in competing projects -- inferior competitors that actually delivered).
If this project had been done above board, and Fred had been working with a decent project manager and had a good communicator on the team, it could well have found a champion and delivered something great. Either that or it could have been killed a lot sooner.

I did one of these. It's actually how I ended up programming.
I was responsible for maintaining a legacy, er... "database". I won't go into gory details, but it was the usual evil application. The company pretty much ran on it, and it would sometimes go down for days. At the time the IS director (a friend) was actively looking for replacements, talking to large consulting organizations, etc., but management was committed/emotionally invested in the existing system. I volunteered (to the IS director) to try to rewrite it (well, more like he asked if anyone was interested in trying to deal with this mess and I volunteered because I was bored). We had no real programmers on staff, and I'd only written a few small ad-hoc things. I had no idea how little I knew.
Finished the thing in about 8 months or maybe a year (this was a while ago, don't remember exactly).
Was it successful?
Yes, it worked as advertised.
Were you found out?
It initially started as a sort of super-secret, cloak and dagger thing. Kind of silly in retrospect, but it made it more fun. About halfway through it just started to become more obvious that that was what I was doing, and as it turned out the idea was supported. Writing this thing eventually became my job.
Were you punished or rewarded?
Rewarded
How did you fund it? / How did you staff it?
The success of it was pretty much due to the support of my boss, who made sure I had the time and resources I needed to do it.
Was it formally adopted?
Yes, we eventually ran the company on it.

Related

How important are development Time Entries? [closed]

For my final year project (BSc Software Engineering) I am looking at time entries for software applications: whether they accurately reflect the development of the project, and whether they can be improved or automated.
For this I will be prototyping a plug-in for Visual Studio using VSPackages that will automatically track which files are being worked on, assigning the files to tasks and projects. The plug-in will also track periods of inactivity within Visual Studio.
This will then be backed up via a simple Web Application for non-technical staff to pull reports from, so that projects can be tracked very accurately.
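To make the tracking idea concrete, here is a rough, language-agnostic sketch of the core logic (the real plug-in would be a C# VSPackage hooking editor events; all names and the idle threshold below are made up for illustration):

    """Sketch of the tracking logic described above.

    A real implementation would live in a C# VSPackage hooking editor events;
    this only shows one way to turn "file became active" events into per-file
    time totals, treating long gaps as inactivity. All names are made up.
    """
    import time
    from collections import defaultdict

    IDLE_THRESHOLD = 5 * 60  # seconds of silence after which we assume inactivity

    class ActivityTracker:
        def __init__(self):
            self.totals = defaultdict(float)  # file path -> seconds of active work
            self.current_file = None
            self.last_event = None

        def on_file_activated(self, path, now=None):
            """Call whenever the editor reports the (possibly new) active file."""
            now = time.time() if now is None else now
            if self.current_file is not None:
                elapsed = now - self.last_event
                # Only credit the gap if it is short enough to count as activity.
                if elapsed <= IDLE_THRESHOLD:
                    self.totals[self.current_file] += elapsed
            self.current_file, self.last_event = path, now

    # Synthetic events: the hour-long gap is treated as inactivity and dropped.
    t = ActivityTracker()
    t.on_file_activated("Billing.cs", now=0)
    t.on_file_activated("Billing.cs", now=120)         # 2 minutes credited
    t.on_file_activated("Invoice.cs", now=120 + 3600)  # long gap, not credited
    print(dict(t.totals))                              # {'Billing.cs': 120.0}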
I currently work in a small company (10 people) and cannot get the large data set I need to draw a good conclusion. For this reason I ask if it would be possible to discuss the topic below, and, if you have a few spare minutes, to fill in my questionnaire and email me the result to the address contained within the document:
http://www.mediafire.com/?dmrqmwknmty
Cheers,
MiG
In answer to your question, development time entries are important. But you can't measure them through a single IDE, nor indeed through any software. The development process is a complex one involving discussions, planning around a whiteboard, diagrams sketched on a piece of paper, research on the Internet, etc etc.
Read Jeff Atwood's excellent post on laziness and the other posts he refers to there. A good, successful developer spends time away from the IDE making sure they don't spend 90% of their working day reinventing the wheel, or 50% of their day heading down the wrong track because they haven't thought the design through.
I find the basic idea interesting, even though automated time tracking has flaws, just as measuring the number and frequency of commits to a project (as done on ohloh.net for example) can be a very misleading indicator about its activity.
However, the reality is that time worked is the basis for billing, and needs to be measured somehow. There are already solutions for this, though.
Take a look at
Grindstone or
AllNetic Working Time Tracker
(there are many more out there but these two I know well).
They work independently from what tool(s)/IDEs I am using, they can detect my absence/presence on the computer and prompt me about how I want to file the time, and they can do all the necessary reporting. It is also easy to add and manage filed entries.
What would your Visual Studio Plugin achieve that these solutions don't offer already?
Time spent developing in an IDE provides only a (sometimes very) partial metric of how much time a developer works.
I have been using FogBugz version 7 lately at work, and it has a feature that allows developers to estimate how long it will take them to finish a case. The developer can then use the software to say, "I am working on this case". Then the clock will count down until it reaches zero, based on the developer's working schedule (including days off), the hours that they say they are in the office, and the percentage of their time that they estimate they are working on cases.
But as a developer, I know that I can easily get sidetracked by more important cases. I also know that I spend a good deal of time working on the cases using tools other than the IDE - such as testing in MbUnit, looking for error message explanations online, or giving status to people who ask me why I have not finished working on a bug yet. And I've also been in places where I spent half the typical day - or more - in meetings or in a lab doing my work on a remote machine somewhere else. When I'm at my desk, I could be using my computer to map out ideas for the work I'm doing, or just pen and paper.
So there are a lot of variables to consider when you ask the question, "Is the guy who sits over there really doing his work?" You would really need to look at more running applications than just Visual Studio 2008 (devenv.exe). You would probably need to look at activity for processes associated with a developer's test framework, text documents, remote desktop connections to other machines, and even Firefox. (Firefox would be a huge judgment call as to whether somebody is actually working!)
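As a rough illustration of how crude even the "watch the processes" approach is, here is a minimal sketch using the psutil library (the watched-process list is an arbitrary example, not a recommendation):

    """Tiny sketch of the "look at more than devenv.exe" idea, using psutil.

    Requires `pip install psutil`. The watched-process list is an arbitrary
    example; as argued above, this still only gives a very rough signal.
    """
    import time
    import psutil

    WATCHED = {"devenv.exe", "firefox.exe", "mstsc.exe", "outlook.exe"}

    def sample_running():
        # Which of the watched processes are running right now?
        names = {p.info["name"] for p in psutil.process_iter(attrs=["name"])}
        return sorted(names & WATCHED)

    while True:
        print(time.strftime("%H:%M:%S"), sample_running())
        time.sleep(60)  # one sample a minute is plenty for this kind of logging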
As part of your research for the project, I would also suggest researching some of the other time collection systems that are in use throughout your company's industry and comparing their features.
A bit off track, but you could potentially use this sort of data to illuminate areas of complexity (LOC), areas that are prone to change (frequent updates 'n' days apart), etc., but even this would be skewed by different programmers' approaches to development.
We track all our time by project daily. It takes me less than five minutes a day to fill out what I was working on. This is not something that can be automated or even should be automated, as it will never be anywhere close to accurate. Files aren't always associated with just one project, and it would cost me more time to tell an application which files belong to which project than the five minutes it takes to fill out my timesheet. No one spends the entire day typing - there are meetings and phone calls and thinking (you know, where you figure out what you want to type!), and none of that will be captured in your automated system. What you are proposing will not be more accurate; it will be less accurate than requiring people to fill in time sheets daily.
While time entries are important, figuring out how to organize them is where trouble comes into the picture. How well would non-technical staff understand the various phases of development in order to understand the data? I'll agree with the other responses that the IDE tracking is a terrible idea, especially if part of what is being done involves changing a database through a web browser, which is the case in my current big CMS project, where we may have to change templates or create content to test whether the functionality works.
This also ignores the ways the system can be gamed. What if I leave my IDE open in debug mode because I want to scan memory, or do something else that requires the window to stay open, but I have actually left my desk? You can't tell, unless you are somehow tracking where I'm looking and sitting.

How to effectively measure developer's work hours? [closed]

I have a few software developers working on my projects and I would like to provide them a way to register the time they spend on real development.
There is goodwill to register development hours - no coercion - but we try to avoid techniques like an Excel-sheet register because it is so uncomfortable.
I can track svn commits, but this is unreliable. Developers also help support different projects during the day, so assuming they work on one project for the whole day is not accurate.
I've seen utilities that pop up a message every hour to confirm the project you're working on, but this is annoying.
Some kind of active-window-title analyzer might help (you can get the solution name from the title in the case of Visual Studio), but I have no experience with such an idea.
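For concreteness, such an analyzer could be as simple as the following sketch (Windows only, needs pywin32; it only logs foreground window titles, so it is a starting point, not a finished solution):

    """Sketch of an active-window-title analyzer (Windows only; needs pywin32).

    It only logs foreground window titles every 30 seconds; mapping a title
    such as "MySolution - Microsoft Visual Studio" back to a project, and
    handling time away from the keyboard, is deliberately left out.
    """
    import time
    import win32gui  # pip install pywin32

    def foreground_title():
        return win32gui.GetWindowText(win32gui.GetForegroundWindow())

    with open("window_log.tsv", "a", encoding="utf-8") as log:
        while True:
            log.write(time.strftime("%Y-%m-%d %H:%M:%S") + "\t" + foreground_title() + "\n")
            log.flush()
            time.sleep(30)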
If you have any experience with programmers/designers work hours registration, please share with me.
Thanks
It's a good question, and the very best way to measure hours spent on a development project is not to measure hours spent at all.
You say there is goodwill to register hours, but I have my doubts. Realistically, from a developer's point of view, excessive time management is a distraction for most (perhaps all) developers on the planet.
I can understand why time is measured so excessively on oDesk. There is a good reason for this: project time is paid up front by the client to oDesk, and the developer needs to prove hours worked to oDesk. Payment is also guaranteed, and it's unlikely oDesk providers and developers ever meet in real life, so there is no trust.
Since it's unlikely you're paying your developers up front, and it's more likely you have better trust established, you need to switch your attention from a management strategy that's kludgy and annoying to something far more useful.
Yes, ladies and gentlemen, I am talking about Scrum. Throw any notion of keeping hour-by-hour tabs on your developers out the window (they'll love you for it), and instead introduce Scrum into the scenario.
Create some sprints (milestones) for your product development, and within them have a list of iterations (batched deliverables); try to keep your iterations down to a weekly period.
Create a product backlog, and make sure you're aware of who exactly is working on what. Find someone on the team to act as a scrum master on your behalf, or take on this responsibility yourself. Make sure you have daily feedback meetings; keep them short and focused, to identify any risks that might get in the way of deliverables. Let the developers more or less drive the timing process, and get realistic estimates for tasks.
Read a book or two on Scrum, and get team members and others involved in the learning curve. Tweak the base Scrum methodology to best suit your particular style of management, and I assure you, you will have a very happy team.
Measure your time in man-days, and try to avoid getting on the back of a developer for hour-by-hour progress...
There are various time-tracking software tools, as you have probably already seen from doing Google searches. But in all honesty you are asking for the Holy Grail of time tracking. For example, do you count a developer staring at her code and thinking as development time? She might stare at the screen for 1 hour and only type for 10 minutes. In this case it looks like she didn't work much, when really she worked for 1 hour and 10 minutes. I don't mean to say that what you're asking for isn't a valid thing to want; it just seems to be one of those problems that doesn't have a perfect solution.
Good luck.
I think you're asking the wrong question and are headed for a slippery slope. There's so much that goes into development that has nothing to do with actually coding.
I think a better solution, if you're dead set on tracking something, is to track the time spent on activities NOT related to development. Of course there's some grey area there too. For example, a meeting to discuss user requirements should probably be counted towards development even though no coding will get done.
You need something like this dashboard to measure time on task. The only way to know the real software-developer time is to have them track it. That way, when they switch tasks they stop the clock, so to speak. I think the hardest thing would be getting the developers to use it as a measure of how long they work on a certain project or even code module, etc. If you can then use those metrics to reduce distractions and other time sucks, you might at least be able to get a decent swag about how much time they spend coding versus email versus talking to other developers, etc.
If you're trying to measure what the developer has as their active window, you have to assume goodwill, because any decent developer can sneak around that if you're trying to turn the screws on them. I spend about a third of my "development time" in Firefox looking at references, for example.
Maybe ask the developers just to keep a log, so you know where their time is going? While that's not ideal, you're never going to do much better than that.
If you are trying to measure the time spent on distractions, disturbances and other tasks, would it not be in your developers' interest to give you this information willingly?
You said somewhere that you are implementing scrum.
If you really have to, either take it up at the daily scrum, making it a part of the ritual, or add a very short daily meeting at the end of the day. Let the developers guesstimate how much of the day was spent on distractions, disturbances and other tasks. To me that feels like it will be as close to "correct" as any other way of measuring, considering the difficulties involved.
So, instead of having the developers note down the time, make it the scrummaster's job to sum it up, and make this as painless as possible for the devs. Make sure that the developers gain something tangible from doing it as well, otherwise it is going to end up on the impediment list awfully quickly.
As Dean J implied, you have to trust the developers anyway.
It depends on your IDE - if you are using Eclipse then I recommend the Mylyn plugin. You can measure the time spent on each task. Tasks can be fetched from most well-known task repositories, e.g. Tuleap. Details here.
The user only needs to put the task into Active mode, and deactivate it when the task is finished to stop the timer. I think Mylyn will support such a process - if the status of a task changes, that could trigger active mode (and closing the task would deactivate it).
Sometimes development involves using a browser or a terminal. Eclipse can be used as a browser and as a terminal as well, so the developer does not need to leave Eclipse, and almost every activity related to a task can be measured.
Try DotProject or XPlanner
An active window analyzer won't give you reliable results, because your developers will swap between programs (Outlook, File Explorer, version control, internet browser, etc.). Your proposed analyzer won't log that time, although it will very likely be part of the development time that the developer puts into the project (software development is not just coding in VS all the time).
Trying to measure a developer's work hours is the wrong notion. A proper question to ask is what is a programmer's effectiveness. That can't be measured by coding time, time spent sitting in front of a computer, or the like.
As Joel Spolsky put it well in a blog on software craftsmanship, software development "...is not a manufacturing process."
A related but somewhat different discussion appears in this SO article on an invasive programmer productivity measurement tool.
I definitely would not recommend using any time-measuring software that developers are forced to use. It is a massive distraction to developers' concentration.
Instead, the following simple techniques can be used:
Spreadsheet: For smaller projects or team of developers
There's probably nothing easier than creating and sharing a spreadsheet online: add project tasks to it, assign tasks to developers, let the developers estimate hours for their tasks, let them update their task statuses (a rough value between 0% and 100%) as they want, and let them record the time (hours) it really took to complete each task.
So in the spreadsheet you may have columns:
task name, assigned to, estimated hours, real hours, done (%)
Google Drive spreadsheet may be the answer.
This is a very simple and fast method which distracts developer(s) minimally.
Scrum: For medium to large projects or team of developers
Tasks and the Scrum information are recorded and kept on a board in the office and/or a special Scrum application can be used. A good web Scrum application is Pivotal Tracker which I would recommend for any size of project or team.
More about Scrum:
http://en.wikipedia.org/wiki/Scrum_(development)
In both cases, the product owners (clients or those who deal with clients), project managers and all developers can more quickly and effectively:
communicate
estimate
see progress of the team and all individuals involved

Single Person Application Development? [closed]

Hey all. I would like to get some insight on a question that I have been trying to find some information about. If you are the solo developer building a project from the ground up, how do you manage the project? In the past, I have worked on a few personal projects that have grown into fairly large projects. In almost all of those projects, I have tried to wear the hats of all the roles that would normally be in place during a normal software development project (i.e. product owner, developer, architect, tester, etc.). It seems that when I leave the project for some time and come back, it is extremely hard to get back into the rhythm of what I was doing. So with that, I have some questions:
If I know the requirements (at this current time), do I record them anyways? If so, how do I go about doing this, and how do I manage these requirements? Product backlog, features list, etc?
If this is the case, are full blown product backlogs or use cases a little overkill?
How does one efficiently appropriate his/her time to each respective role?
What would be a normal flow of events that one would follow? Start coding immediately, write down user stories/use cases, then go into OOA/D?
What diagramming/modeling would be sufficient for this level? Domain model, class diagram, etc?
Basically, I was curious how everyone out there in the SO community would go about developing a project from inception to deployment when you are the lone, solo developer. What steps, documentation, and other project related activities are needed to help bring this project from an impractical, hobby project to something more professional? Any help, references, or suggestions would be greatly appreciated. Thanks in advance.
The most difficult part, I have found, about developing solo is that it's just tough to keep yourself driving forward. Even if you're doing this to make a living (AKA, running your own software business), unless you have pressing needs (AKA, you're going to starve if you don't make money) it can be difficult to sit down and just code.
From your perspective, I would recommend following good software practices where it makes sense to. For example, if I were a solo software developer, I would have no reason to create a collaborative development environment. All I really need is an SVN server, my IDE, and a place to record documentation (I might set up a wiki or a website or something). I would personally create a realistic schedule and work on sticking to it.
As for level of effort of documentation, that really depends on you and the product you are developing. For example, I would definitely recommend recording your requirements. Unless your product is trivial, there is no way you'll remember them all and why you wanted certain ones over others. Managing a full backlog, however, can be a job in and of itself. In the solo programmer case this may not make sense.
Basically, the point I'm trying to get across (and should be followed with every project - not just in this case) is have just enough management that makes sense. The rest should be focused on the work and the development of the product.
Something else you may want to look into is reading this - Agile Programming Works for the Solo Developer. There are other, similar, articles out there. Might give you some good thoughts.
If I know the requirements (at this current time), do I record them anyways? If so, how do I go about doing this, and how do I manage these requirements? Product backlog, features list, etc?
I have two lists of features:
A high-level view which states the scope of the finished product
A list of the features which I'm implementing in this iteration
Because I don't need to communicate it to other people (yet) I tend to write down the things that I don't know about the project (if I already know it there's no need to write it down): it's when it gets too complicated, or when there are details which I haven't defined but need to define, that I start to define them in writing.
I did, however, try to investigate and make a business case for the project before I started coding.
How does one efficiently appropriate his/her time to each respective role?
I did non-programmer, product-owner thinking at times when I had to be away from the computer anyway.
Apart from that, my cycle is:
Implement more functionality
Integration-test it
[repeat as above]
Every 3 to 6 months I compare the new-functionality-accomplished against my estimated schedule, and then recalibrate: i.e., make a new list of the highest-priority features to implement in the next few months.
What would be a normal flow of events that one would follow? Start coding immediately, write down user stories/use cases, then go into OOA/D?
I started with working part-time or in my spare time, to make sure that I had:
Understood the required functionality
Made significant architectural decisions
Written any throw-away prototypes as necessary to learn new technology
After that I was ready to start developing full-time.
What diagramming/modeling would be sufficient for this level? Domain model, class diagram, etc?
I'm not using diagrams at all (except for sketches of the UI). By structuring the code, and refactoring, I'm able to know/remember/rediscover/decide which software components implement what functionality.
It seems that when I leave the project for some time and come back, it is extremely hard to get back into the rhythm of what I was doing.
You need to comment your code more. If you leave the code, come back in two weeks, and can't remember how the code works, you need more comments.
If I know the requirements (at this current time), do I record them anyways?
Yes, for the same reasons stated above.
how do I manage these requirements?
A feature list is OK, provided you have enough detail in each feature to jog your memory.
How does one efficiently appropriate his/her time to each respective role?
Break down each feature into smaller and smaller tasks, until you feel like you can do each task in a half day or less.
What would be a normal flow of events that one would follow?
That depends on your development style. In general I would follow a clear but simple architecture, avail yourself of software patterns where practical, and provide adequate unit tests for your code as you go.
What diagramming/modeling would be sufficient for this level?
Sufficient diagramming/modeling to make the project clear in your head.
What steps, documentation, and other project related activities are needed to help bring this project from an impractical, hobby project to something more professional?
Other than what I have already mentioned, make sure you have a good source control system and daily backups in place.
Good luck!
If you believe there is a chance that you're going to work on the project for some amount of time, leave it, and then come back to it at a later date...your best bet is to treat the documentation for the project the same as if you were working with a large team.
That means documenting requirements (even if they're from yourself), writing use cases (if functionality is going to be complex; otherwise some other form of documentation could suffice), and some level of UML diagramming (or other domain-specific diagrams), which could include activity diagrams/class diagrams/etc.
That way, when you leave the project for some amount of time, you can come back to a well documented idea and pick up where you left off.
As a side note, I try to do the majority of those things no matter what...that way if I ever find somebody interested in working on the project with me, I can get them up to speed quickly and get them on board with my ideas.
This is how I work, YMMV:
Keep a spreadsheet for a high-level view of everything - a list of your projects, and some top-level items/todos/reminders
Create a "project" folder for each product/project you have or work on, and create a structure to contain documentation and code for the project.
Keep a top-level "catch-all" document for each project, in the root of this folder. Keep your ideas, research, notes, etc. in this doc.
Then if you want to get organized, keep an MS Project file (or similar) and plot out timelines for the various steps in each project. This is good for tracking progress on each project and making sure you aren't forgetting anything. Basically it keeps you honest with yourself.
And if you need to track progress on project work you are doing for clients, I understand Basecamp is a good solution for this. I am currently evaluating it for my own company. See www.basecamphq.com
Even as a solo developer, you should document at least the overall features of your project, and then the requirements for the particular feature you are working to complete, and then maybe produce a short pseudo-code for the functionality you're currently working on.
That way, if you do end up breaking away from that project, you can get back to it and see where you're up to easily enough. It's also pointless getting too far ahead of yourself with details for this same reason.
It's also a neat motivational tool for a solo developer - getting through and ticking things off is a way to show progress - something that you can start to feel you're not making when you're chewing through a couple of thousand lines of code and it seems like you're still miles away from actually having 'module x' completed.
Lastly, with regard to code comments: I at least try to fill out, as an outline, what actions/behaviour a new function should have, and then write the code in between the comments. It is also useful to have plain-English explanations of why you're branching in an if/else to support the logic in the condition...
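For example, a made-up illustration of that outline-first style:

    def import_orders(csv_path):
        """Made-up example of writing the comment outline first, code second."""
        # 1. Read the raw rows from the CSV file.
        import csv
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))

        # 2. Drop rows with no order id, since we cannot key them to anything.
        rows = [r for r in rows if r.get("order_id")]

        # 3. Convert amounts to numbers so callers can total them directly.
        for r in rows:
            r["amount"] = float(r["amount"])

        # 4. Hand back the cleaned-up rows; persisting them is the caller's job.
        return rows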
I believe that in solo development one can achieve better results with appropriate tool support and practices that compensate for the lack of other people and help organize working time. Any tool that generates metadata describing your software with minimal creation cost is helpful:
A VCS and tools for tracking user activity/code-change history - it is very important to write good commit messages
Mind-mapping tools for storing project-related data (e.g. XMind); a blackboard is useful too :)
Time-tracking tools (e.g. Toggl.com)
Write a lot of acceptance tests and use acceptance-testing frameworks
Of course these tips also apply to non-solo development :)
As a lone developer, I've found that your time is very expensive. This means that you have to balance sustainability and momentum - even though you are just one guy, you have to do things so that the you six months from now can go back and look at old stuff without wasting time, without spending so much time maintaining the systems that it compromises your flow.
Your question suggests that you are thinking in terms of fairly heavyweight tools and processes, but the 80/20 rule applies - for example, you can nail documentation well enough by TDD, using the doc tools of your platform to generate API docs, plus a Wiki for specs, lists, etc.
In that vein, I would suggest that you choose your platform carefully. The question about modelling suggests that you are using a platform that produce a lot of code and artifacts, but you may be able to get most of the functionality for much less management overhead elsewhere. Today I'm working on a .NET Web app that I wrote "the right way", but now realize that I could have delivered the same functionality much more efficiently in this case by using PHP with a PHP MVC framework to keep a clean structure.
Specific tools that I'd recommend:
A distributed version control system (much less overhead than centralized)
The most lightweight platform that you can use that has good tooling
A Wiki to easily capture and maintain small and large bits of content
Whatever testing framework that you can use, right from the start of the project
A lightweight TODO list system that you can access from anywhere
I used to work on a very small team (one DBA and one C# developer). Even then I found it very useful to have written requirements, formal tests, source control and bug tracking (we used bug tracking for our features as well as bugs). It helped us to not forget anything, and a year later, when you were doing maintenance, you had something to research through to help you understand what you did. Plus, when the two of us left (as most people eventually move on), there was documentation for the next person.

Project Termination [closed]

I was recently working with a team to develop an online system. We had worked for several months and were making good progress when the project got canned. We all felt strongly that the project's completion was important and that it would have greatly improved our consumers' productivity. After being frustrated for a while I thought I should ask some people with more experience.
What is the best way to deal with the frustration of a canned project and move forward so that it doesn't hold future possibilities back?
On a well designed project, some of the code you developed can be reused in future projects, making it worthwhile. Even if you can't use any of it however, you and your team probably gained valuable experience that will help in the future as well. Think of it as an expensive team exercise.
Don't put your heart and soul into someone else's project?
I do a lot of work for different people and while some projects are more interesting than others they're not my projects so I wouldn't be too broken up if they got canned. I've got my own stuff I'm working on. No one can terminate those projects but me.
Grieve. Such a loss will produce a grief reaction. Not one as strong as though you had lost a loved one, but it's a grief reaction nonetheless, complete with all those stages of grief.
Failure is the best (and sometimes only) way to learn new things, even if the failure is not your fault. There are many different angles by which you can salvage useful information from this:
Code that is reusable
New technologies or skills garnered from the project
Lessons about project management based on how the failure was handled (maybe the project should have been canceled much sooner, before the team bought into it)
Non-technical ideas that you can reuse in other projects for the company or even in your own endeavors.
I highly recommend doing a postmortem, but don't dwell. Most projects get canned at some point in their cycle and if you let it affect your morale, it becomes a downward spiral from which it's hard to recover. You may become oversensitive to even slight requirements changes.
Attack every project as though it were your own. By this I don't mean invest all of your emotions (as stated here already by Spencer Ruport), but write and organize all your code in a manner that lets you easily pull out tools that you might need in the future. You never know if you will need them... but odds are you will. If you write an account-manager app, do it in a modular, reusable fashion. If you write an image uploader, write it in a way that it can be ported to any other project you have. Write helper functions around all of your major features to make them more user friendly down the road.
This of course requires some planning prior to losing the gig! No worries. It rarely is because of you that you (the whole team) lose the gig. Some financial or business decision is usually at play; in this case most likely the economy is what killed you. In the case that you don't have any tangible benefits from the failed project, look at it as a learning experience. Inevitably, no matter how good you are, you probably did something that you no longer agree with. Learn from that. You most likely also did something very cool that you loved. BLOG ABOUT IT! This serves two purposes: you just created something tangible from the project, and you put it somewhere that you won't forget about it.
Sucks all the way around. But at least there is a great market out there right now! Contact me directly if you want my headhunter list (80 technical recruiters in CA and the US).
Two things:
Your Investment in the Project & Code: The fact your team had such strong feelings for the project and were so frustrated at it being canned is a good sign - it means you are true developers/programmers and are not just doing a half-job for full pay. So, to deal with the project being canned: know that you and your team are committed to your work, and while that project may not have panned out, you sound like a real credit to that project and any other you may work on. It sounds like you just need to find a project/opportunity that has legs.
My Experience: Projects get canned for all sorts of reasons - budget, lack of confidence from stakeholders, too late to market, changed scope, etc. I would enquire/investigate why your project was canned. If it is budget or lack of stakeholder confidence, then it is really good news: it means an opportunity has just presented itself to you and your team. Consider pursuing it!
Either way your team will have grown from the experience: both technically & from a business perspective.
cash the paycheck - that always helps ;-)
ask if you can have the rights to the canned project, since they don't want it, then open-source or commercialize it yourself if you think it's worthy
it's good to care about your work; it's not so good to obsess over it.
there will be other projects even better than that one in the future; they might also get canned, for any number of reasons both rational and irrational
Good example: I once worked with a lady who spent 2 years on a document-imaging project that was canned a few days before it was supposed to go live; it was canned because the new manager did not like the old manager, and the project was his "pet". This lady's reaction: "I'm looking forward to learning something new!"
This can be used to bring your team closer together, if you have the right sort of people. There is nothing quite like working hard on something you believe in and then having it canned. It can depress, but it can also motivate people to want to prove next time that they can do the job, that they had the right idea.
It helps to galvanize the team; we were there, we worked hard, and it was taken from us.
Of course, it's better not to be in that situation to start with, but when you find yourself there use it to build the team.
Sunk cost cannot be used as a reason for the continuance of a project. If the leaders have made a business decision then I'm sure that it is well motivated, however upsetting.
I'd console yourself with the fact that big swings should be celebrated in business; big companies do not win every bid or complete every project they start. So console yourself in having lost once. Maybe next time you will be able to change the way things were done, or focus more on the project stakeholders to make sure they understand why your project is worth completing compared to the other projects and business initiatives at the company.
I'll finish with my favourite saying:
"Good judgement comes from experience. Experience comes from bad judgement."
Learn from it!
Watch a Rocky movie (the last one was good) and have a few beers. There's no way not to put yourself into a project, there's no way to not feel bad about a project being terminated or failing, there's no way not to feel negative about the company. What makes a good programmer better is taking all the emotions, anger, etc. and being able to release it and move on with the same focus and dedication that was there with the first project. All part of life and all part of working in IT.

Policy for fixing broken nightly builds [closed]

I guess everybody agrees that having continuous builds and continuous integration is beneficial for quality of the software product. Defects are found early so they can be fixed ASAP. For continuous builds, which take several minutes, it is usually easy to find the one who caused the defect. However, for nightly integration tests, which take long time to run, this may be a challenge. Here are specifics of the situation, for which I'm looking for an optimal solution:
Running integration tests takes more than 1 hour. Therefore they are run overnight. Multiple check-ins happen every day (team of about 15 developers) so it is sometimes difficult to find the "culprit" (if any).
Integration testing environment depends on other environments (web services and databases), which may fail from time to time. This causes integration tests to fail.
So how to organize the team so that these failures are fixed early? In my opinion, there should be someone appointed to DIAGNOSE the defect(s). This should be the first task in the morning. If he needs an expertise of others, they should be readily available. Once the source (component, database, web service) of the failure is determined, the owner should start fixing it (or another team should be notified).
How to appoint the one who diagnoses the defects? Ideally, someone would volunteer (ha ha). This won't happen very often, I'm afraid. I've heard another option: whoever comes first to the office should check the results of the nightly builds. This is OK, if the whole team agrees; however, it rewards those who come late. I suppose that this role should rotate within the team. The excuse "I don't know much about builds" should not be accepted. Diagnosing the source of the failure should be rather straightforward; if it is not, then adding more diagnostic logging to the code should improve the visibility into integration test failures.
Any experience in this area or suggestions for improvements of the above approach?
A famous policy about broken nightly builds, attributed to Microsoft, is that the guy whose commit broke the build becomes responsible for maintaining nightly builds until someone else breaks it.
That makes sense, since
everyone makes mistakes, so the necessary rotation will occur (empowered with Least-Recently-Used choice pattern for ambiguous cases)
it encourages people to write better code
What I generally do (I've done it for a team of between 8 and 10 people) is to have one guy who checks the build as the first thing he does in the morning -- some would say he is responsible for QA, I suppose.
If there is a problem, he's responsible for finding out what/how -- of course, he can ask help from the other members of the team, if needed.
This means there's at least one member of the team that has to have a great knowledge of the whole application -- but that's not a bad thing anyway : it'll help diagnose problems the day that application is used in production and suffers a failure.
And instead of having one guy to do that, I like when there are two : one for one week, the other for the second week -- for instance ; this way, there are greater chances of always having someone who can diagnose problems, even if one of them is in holidays.
As a sidenote : the more useful things you log during the build, the easier it is to find out what went wrong -- and why.
Why not let everyone in the team check the build every morning ?
Well, not every one wants to, first of all -- and that will be done better if the one doing it likes what he does
And you don't want 10 people spending half an hour every day on that ^^
In your case I'd suggest whoever is in charge of the CM. If that is the manager or technical lead who has too many responsibilities why not give it to a junior developer? I wish someone had forced me early in my career to get to know source control more thoroughly. Not only that but looking at other people's code to track down a source of error is a real skill building or knowledge learning exercise. They say you gain the most from looking at other people's code and I'm a firm believer of this.
Pair experienced with inexperienced
You may want to consider having pairs of developers diagnose the broken builds. I've had good luck with that, especially if you pair team members who have little familiarity with the build system with team members who have significant familiarity. This may reduce the possibility of team members saying "I don't know much about builds" as a way to try to get out of the duty, and it will increase your bus factor and collective ownership.
Give the team a choice of your assigned solution or one of their own making
You could put the issue to your team and ask them to offer a solution. Tell them that if they don't come up with a workable solution, you will make a weekly schedule, assigning one pair per day and making sure that everyone has the opportunity to participate.
Practice continuous integration so you don't need infrequent mega-builds
You can distribute builds between machines if it's too slow for one machine to do.
Use a build status monitor so that whoever checked something in can be made responsible for build failures.
Have an afternoon check-in deadline
Either:
Nobody checks-in after 5pm
or
Nobody checks-in after 5pm unless they're prepared to stay at work until their build passes as green - even if that means working on, committing a fix and waiting for a rebuild.
It's much easier to enforce and obey the first form, so that's what I'd follow.
Members of a former team of mine actually got phoned up and told to return to work to fix the build... and they did.
I'd be tempted to suggest splitting things up in either of a couple of ways:
Time split - Assuming that the tests could run twice a night, why not run the tests against the code at two different points in time, i.e. all the check-ins up to X p.m. and then the remainder? That could help narrow down where the problem is.
Team split - Can the code be split into smaller pieces so that the tests could be run on different machines to help narrow down which group should dig into things?
This assumes you could run the tests multiple times and divide things up in such a way, so it is only a rough idea.
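As a rough sketch of the "team split" idea, tests could be sharded deterministically so each nightly job (or sub-group of the team) always runs and owns the same slice (the glob pattern and shard count below are made up):

    """Sketch of the "team split" idea: deterministically shard the test suite
    so each nightly job (or sub-group of the team) runs and owns one slice.
    The glob pattern and shard count are made up for illustration.
    """
    import glob
    import zlib

    NUM_SHARDS = 3

    def shard_of(path):
        # Stable hash, so a given test lands on the same shard every night.
        return zlib.crc32(path.encode("utf-8")) % NUM_SHARDS

    tests = sorted(glob.glob("tests/integration/test_*.py"))
    for shard in range(NUM_SHARDS):
        shard_tests = [t for t in tests if shard_of(t) == shard]
        print("shard", shard, ":", len(shard_tests), "tests")
        # Each nightly job would then run only its own shard's files.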
We have a stand-up meeting every morning, before starting work. One of the things on the checklist is the status of the nightly build. Our build system spits out an email after it's run, reporting the status, so this is easy to find out - as it happens, it goes to one guy, but really it should go to everyone, or be posted onto the project wiki.
If the build is broken, then fixing it becomes a top-priority task, to be handled like any other task, which means that we'll decide at the standup who is going to work on it, and then they go and do so. We do pair programming, and will usually treat this as a pair task. If we're short-staffed, we might assign one person to investigate the breakage, and then pair someone with him to fix it a bit later.
We don't have a formal mechanism for assigning the task: we're a small team (six people, usually), and have collective code ownership, so we just work it out between ourselves. If we think one particular pair's checkin broke the build, it would usually be them who fix it. If not, it could be anyone; it's usually decided by seeing who isn't currently in the middle of some other task.
