Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I spent a couple of hours trying to find any up-to-date figures on the market share of software development methodologies such as Waterfall, RUP, or Scrum, but could not find any useful information. Does anybody know of such surveys? The corresponding document does not need to be freely available, but of course I would appreciate it if it were.
Thank you very much!
Seb
Here are a couple of the documents I have on hand to help you with your research.
The Influence of Organizational Structure on Software Quality: An Empirical Case Study
Nachiappan Nagappan (Microsoft Research, Redmond, WA, USA; nachin at microsoft.com)
Brendan Murphy (Microsoft Research, Cambridge, UK; bmurphy at microsoft.com)
Victor R. Basili (University of Maryland, College Park, MD, USA; basili at cs.umd.edu)
In Proceedings of the International Conference on Software Engineering (ICSE), 2008, Leipzig, Germany
Splitting the Organization and Integrating the Code: Conway's Law Revisited
Debugging the Development Process
Managing Humans: Biting and Humorous Tales of a Software Engineering Manager
I believe you will find that most software developed for business systems follows iterative development cycles, with a rough methodology similar to Scrum, even if most teams wouldn't realize it.
About the only place you will ever see a static methodology such as Waterfall is a large government project that requires every single technical and business design document to be completed and approved before any software development begins.
Since you are willing to spend money, you could turn to a professional analyst firm like Gartner Research. They generate tons of reports and you might find something in their archives. Major corporations often cite studies by Gartner.
If that does not yield any results, you should do a search in research papers. Google Scholar might help you there.
If all else fails, and you have enough time on your hands, you could perform a small study yourself: Pick random companies and tell them you are doing research and that you would like to ask them a few questions.
If such a thing existed...
There would be standards based on the results. If anywhere close to 50% of shops actually used Scrum or RUP or anything, there would be an applicable standards organization pounding out the details.
We'd all be told specifically what to do based on the results. Our lawyers and accountants would ask why we're using a methodology only used by 15% and not a methodology used by 28%. We'd have to contend with armchair generals quoting the results at us.
There would be products for sale based on the results. "Supporting the most popular methodology." "One of the most popular methodologies." "Trouble-tickets for the fastest growing methodology."
You'd see advertising that quoted the results and claimed specific quantitative benefits. "28% of organizations use our version of Scrum with improved on-time delivery."
Ever see any advertising or standards based on adoption of a methodology? Anything?
Such quantitative studies probably don't exist.
Also, a precondition for counting is definition. Can you define Scrum in a way that it's somehow different from XP? I doubt it.
I think this kind of data cannot possibly exist. It requires far more formality and standardization than is remotely possible for something as complex as software development.
I don't think you will find reliable data on what you're looking for. I've been looking for that kind of figure for a few years and I haven't found it.
First of all, very few organisations will tell you what method they are using. Some just don't use any. Some others don't know what they use, or what to call it. And some know what to call it, but won't disclose it for whatever reason. Among the organisations that will tell you, which are (in my experience) a minority, there is a big asymmetry in how they characterise what they tell you.

The way your own question is worded illustrates this: most industry people (and many academics) today, when asked to list methodologies, think of waterfall, RUP, Scrum, XP, and a few other "trademarked" agile approaches. Interestingly, they are perfectly capable of citing a number of agile approaches whose differences are usually much smaller than the differences between the (almost forgotten) methods that get bunched together under "waterfall". Agile approaches are so heavily marketed and hyped that, like Coca-Cola or McDonald's, they are ever-present in our daily lives.
Methodologies are often presented as either waterfall or agile. That is a terrible fallacy, fostered by the agile community. There are successful methodologies that do not qualify as waterfall and that predate (and do not qualify as) agile. However, they seem to be ignored, and they rarely surface in surveys such as the one you ask about. Very rarely do I find people in industry reporting the use of methods such as Catalysis, OPEN/Metis or Fusion.
(Note: Don't misunderstand me; I appreciate the value and contributions of the agile movement. But I am no raving fan; I am a researcher who tries to make an objective assessment.)
In summary, I don't think you'll find a study with data that answers your question. But, in your search, I suggest you take into account these comments.
Good luck. :-)
This may not sound helpful, but don't give too much weight to buzzwords. What you need are good programmers/software engineers with a sense of, or instinct for, what needs to be done. Most of these processes were invented because fearful programmers stuck too closely to one of these paradigms, the car went into the wall, and someone rightfully pointed out what they had missed. But that can happen with most strategies if you don't see the situation in which you are developing as a whole.
Also, I don't see the more recently hyped methods like XP in your list. They work well even in smaller teams. :)
Closed. This question is opinion-based. It is not currently accepting answers.
It is a well-known fact that IT projects fail at an alarming rate (some surveys suggest a failure rate of more than 60%). Typically, project managers try to "recover" from these failures either by squeezing their resources to work extra hours or by compromising the quality of the deliverables (reducing testing effort, reducing scope, etc.). Unfortunately, software quality is not deemed very important by business leaders.
I wonder if this is true of other professions as well. How are projects managed, for example, in the construction industry, where the cost of failure is very high and where a single mistake can be catastrophic? Mega engineering projects like the Eurotunnel and the Petronas Towers required thousands of people and billions of dollars to construct, and yet most such projects were completed successfully on time or sometimes even ahead of schedule.
Are there some lessons we can learn from how projects are planned and managed in other industries?
I think the biggest difference is that they would never consider starting a project with the same kind of shoddy requirements we are given. Maybe we should stop doing so as well and force people to actually define what they want before we start trying to code it.
I wanted to add that we as an industry do a lousy job of pushing back with a new timeline and budget when the requirements (such as they are) change. We started doing much more of this pushback here: telling the customer how many more hours (and how much more money) the requested change would take, adding the extra days to the existing deadline, and making them formally put in a request for change. The number of requested changes to the initial requirements dropped drastically once we insisted there would be a cost for the change. This also moved us from a cost center to a profit center in the company, because previously we had been doing a lot of extra work without charging the customer more than the initial estimate.
Let's take a bridge as an example, and compare it with software.
The bridge will have fewer external specifications. It will have some pretty exacting specifications, but a lot of those will be internal (such as material strengths).
It will be designed by people who know that bridge design is not to be excessively rushed. In general, civil engineers will get more respect from their management than software developers. The civil engineers will in addition be working in a much more constrained problem space. There aren't nearly as many ways to make a bridge as an inventory system.
When the design is done, one or more licensed professional engineers will sign off on it. This is accepting real responsibility. (Alternately, no PE will bet his or her license on its soundness, and the design won't go anywhere.) This doesn't happen in software, partly because the problem space is so unconstrained.
Finally, the bridge will be built, and this will take months and a lot of heavy equipment. Software will be built initially with a compiler and reproduced indefinitely with cheap tools. There is a great psychological significance here: people tend to think of projects as having significant design and significant manufacturing stages, and if manufacturing is too trivial tend to think of part of the design as manufacturing.
If software were to be more like civil engineering, we'd need standard practices, adequate and reliable, for most things. We'd need engineers to study those practices and be willing to certify that software either was or was not designed properly, and in fact we'd need projects done according to those practices to be almost completely reliable. We'd need more formal assumption of responsibility there. We'd need more external respect, because managers who would not dare throw away a $10 million construction project by meddling will often have no qualms about messing up a $20 million software project.
In short, software is too immature a discipline to work like civil engineering.
A lesson that can be learned from engineering: respect the non-functional requirements.
Functional requirements are hard enough, as they expect the users, developers, analysts and anyone else involved to be able to define what is needed to do business.
Non-functional requirements are the things engineers and technical people come up with that functional people may be blind to. It's very easy to overlook or downplay the non-functionals. Some PMs I've worked with in the past did not want to hear about non-functionals because they couldn't directly be tied to business needs and introduced tasks that would threaten the metrics of "on time and on budget".
Example
Functional requirement: The luxury ship must be able to carry X passengers.
Non-functional requirement: A ship big enough to carry X passengers needs a hull Y units thick to sustain its integrity even upon impact with an iceberg.
Counting the Cost
Why don't software PMs respect non-functional requirements? Because the cost for such disrespect is different for engineering than it is for software development.
Cost of disregard for non-functional requirements in my ship example: loss of life.
Cost of disregard for non-functionals in software: some wimpy thing called technical debt, which will later be paid for by people long removed from the project team and the project team's completion bonuses.
Granted, my examples are simplistic. Not all engineering foibles lead to death, while some faulty software certainly can (avionics, medical, or traffic control systems being a few examples). But I think you get the idea.
There is a difference between software development and the engineering or construction industries: a specification can always be changed.
"Walking on water and developing software from a specification are easy if both are frozen." -- Edward V. Berard
You can find a more detailed explanation of the differences in approach to project management at the beginning of the book "Extreme Programming Applied".
Closed. This question is opinion-based. It is not currently accepting answers.
This book was written in the era of time-sharing systems and procedural programming, with about 30 fewer years of software engineering experience to draw on. With improvements such as existing libraries, higher-level languages, IDEs, and the amount of documentation and examples available on the internet, how much of the book still holds true?
While I can believe that adding new people to a project may initially slow it down, I would think that unit testing, separation of concerns, and other forms of automation and design improvement would allow new members of a team to become productive faster than assumed in the book, provided the project had good design documentation and processes in place.
I don't have experience on large projects or with large teams, so I am interested to hear what those of you who do think.
edit:
I was wondering if new communication tools such as wikis, instant messaging, and the internet in general have decreased the time spent communicating. Based on everyone's answers, I would say that any increase in communication efficiency has been offset by increased complexity.
It is still as true today as the day it was written. This is because it is fundamentally about communication between people working on the same project, and none of the advances of the past 30 years have substantially changed that.
Of course, we have learned a lot in those 30 years, but all improvements in our tools and understanding have been incremental, in accordance with Brooks' "no silver bullet" prediction.
Isn't this kind of like asking if Sun Tzu's Art of War is still applicable to warfare since we have modern equipment?
The book still has things to tell us, and I for one have experienced the problems in communication that increased team sizes bring. You should be aware that unit tests, separation of concerns etc. are not new concepts.
However, some things have not stood the test of time. I don't believe that writing ASCII flow charts in your code is a good idea, and the "surgical team" approach suggested has been tried by several people (Charles Simonyi at MS, most famously) and found not to work too well.
The idea isn't that "large teams don't work", it's that "throwing people/money at the problem isn't the answer". Things like unit testing, separation of concerns, etc. are doing other things rather than just throwing people at the problem. These other things allow you to carefully add more people in the right place to speed things up. If anything, the points you make support the ideas of the book.
Both of the famous Brooks writings, "No Silver Bullet" and "The Mythical Man-Month" are explanations of fundamental limitations, in programming languages and project management respectively.
While it is true that some of the chapters a bit past the halfway point of TMMM deal too heavily with the technology of the time, the remaining chapters are still as relevant today as they were when written.
In TMMM, Brooks follows a technique of "outline the problem", "show some false starts", and "propose my own solution". Some commentators above have pointed out that his own solutions may be considered outdated at this point, but his concise description of the problems inherent in large projects makes the book worth reading.
One theme he keeps coming back to is communications overhead as a limiting factor for large teams. As a thought experiment, consider the effect of the Internet as a communications medium for programmers, and the catalyst it has been for large open source projects.
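Brooks even quantifies that overhead: n people have n(n-1)/2 potential pairwise communication paths, so the burden grows quadratically with team size. The formula is Brooks's; this little Python illustration of it is mine:

    # Brooks (TMMM): n workers have n * (n - 1) / 2 potential
    # pairwise communication channels.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for team_size in (2, 5, 10, 20, 50):
        print(f"{team_size:2d} people -> {channels(team_size):4d} channels")
    # 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225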
Personally, I would read the book just for the "The Joys of the Craft" section. I've never read anything that so elegantly describes what programming at its best feels like.
(If you're curious, it's on page 7, and viewable in the amazon.com "Look Inside!" feature)
I certainly think things like "No Silver Bullet" are just as applicable today as they were decades ago, especially as we see more and more young people come into industry and think x is the latest and greatest killer language/technology and all the other technologies will die because of it.
Granted, the references to Ada or sharing computers are antiquated, but the concept of accidental and essential difficulties, buy vs. build, how code is complex by definition because we don't repeat parts, and all the other theoretical topics are still completely accurate and relevant.
The other argument for why TMMM is relevant is that it's not really about software itself but about how programmers get things done. In this way, it's hard for it to become obsolete.
The two that stick out in my mind: "version 2" still applies, and so does "adding more people doesn't necessarily make it faster".
Spolsky discusses "version 2" in his own way. I do not recall whether he specifically links it to TMMM, but it is very similar in concept.
Communication has become a lot more efficient than when TMMM was written; however, I think it's all proportional. It takes a lot more to make software production-ready than it did back then.
Someone said that everything in computer science was discovered in the 1960s and everything since then has been derivative.
Read TMMM as a book outlining a problem, perhaps THE problem, in software engineering: it's not the technology, it's the people! All the improvements you mention spring from that core realization. They are all in place to solve the problems Brooks laid out. This is the book that I'm sure Kent Beck and Ward Cunningham and Alistair Cockburn and Martin Fowler all read, took to heart, and then began to craft their silver bullets.
I consider this to be one of the "must read" books for anyone who wants to understand the software development process.
The demand for a development workforce has been growing rapidly for the last 40 years, and this need will not stop. Since the proportion of clever people (cf. Joel's "smart and gets things done") in the population remains generally the same, educating more and more developers each year means that the average smartness of a developer is getting lower.
40 years ago it was demigods who became developers; 20 years ago it was a job for clever people; now, when I look at young CS students at my alma mater, it seems they are taking everyone who knows what a computer is.
This does not mean a disaster is coming. The Western world keeps importing smart people (or outsourcing work) from emerging markets or third-world countries, and new development tools make it easier to develop good applications. Those factors seem to neutralize each other, keeping TMMM eternally true.
The Mythical Man-Month is a very dated read, but the core truths still apply. Sure, Brooks discusses the need for a secretary, which is clearly not true today, and his concept of a surgical team doesn't work well, but most of the book is still accurate. His insight that communication requirements increase along with the size of the team is still true. His observation that adding people to a late project makes it later has been borne out by a lot of projects. Unit tests help a little, but they don't stop one from misunderstanding the code or asking a lot of questions. No Silver Bullet also continues to stand the test of time.
All of it. The simple fact is that software projects are nontrivial; we build our own domain knowledge, really, directly into our solutions. Domain knowledge transfer is costly, both for the transferrer and for the transferee; this has not changed. And I, for one, believe it never will, no matter what the practices and tools used. Things may get marginally better, but the simple fact is that teaching and learning are both expensive and difficult things, and there's just no way to avoid them.
The social factors are still present, because humans are still essentially the same beasts we were 50 years ago.
The technical examples are almost completely obsolete, and only make sense when you think about the 0.034 MIPS System/360 of 1964. When you've only got 8 KB of memory, suggesting (as Brooks did) that the user should be responsible for handling leap years instead of wasting 26 bytes of system memory made sense, but today it seems downright silly. I don't know any system that small today; your telephone is thousands of times more powerful than the most powerful OS/360 system. Today we know a lot more about usability and human-computer interaction, and making the user responsible for that category of thing is just crazy.
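For perspective on how tiny that trade-off looks today: the leap-year logic Brooks wanted to push onto the user is just the Gregorian divisibility rule, a one-liner in any modern language. A sketch of mine, not code from the book:

    def is_leap_year(year: int) -> bool:
        # Gregorian rule: every 4th year is a leap year, except
        # centuries, which are leap years only when divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    assert is_leap_year(2000) and not is_leap_year(1900)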
One programmer can now write more code/build more software than one programmer could back then, but adding the second developer is not going to produce twice as much.
If/when I get on a project with good design documentation and processes in place, I'll let you know if that improves anything.
If you think of a large project that is late, it is most likely in crisis/panic mode. As with most things in this mode, the best answer is a half-decent plan; simply throwing more resources at a crisis doesn't solve anything, it just wastes resources and compounds the problem.
There is no substitute for scheduling (pronounced shed-uling), planning and decent management.
As with most of these "one-liners" or "golden rules", consider them more as guidelines (with context) than as rules set in stone.
Closed. This question needs to be more focused. It is not currently accepting answers.
What are useful strategies to adopt when you or the project does not have a clear idea of what the final (if any) product is going to be?
Let us take "research" to mean an exploration into an area where many things are not known or implemented and where a formal set of deliverables cannot be specified at the start of the project. This is common in STEM (science: physics, chemistry, biology, materials, etc.; technology; engineering; medicine) and many areas of informatics and computer science. Software is created either as an end in itself (e.g. a new algorithm), as a means of managing data (often experimental), or for simulation (e.g. of materials, reactions, etc.). It is usually created by small groups or individuals. (I omit large science such as telescopes and hadron colliders, where much emphasis is put on software engineering.)
Research software is characterised by (at least):
unknown outcome
unknown timescale
little formal project management
limited budgets (in academia at least)
unpredictability of third-party tools and libraries
changes in the outside world during the project (e.g. new discoveries, which can be positive - saving effort - or negative - getting scooped)
Projects can be anything from days ("see if this is a worthwhile direction to go") to years ("this is my PhD topic") or longer. Frequently the people are not hired as software people but find they need to write code to get the research done, or get infected by the software-writing bug. There is generally little credit for good software engineering - the "product" is a conference or journal publication.
However, some of these projects turn out to be highly valuable. The most obvious area is genomics, where in the early days scientists showed that dynamic programming was a revolutionary tool for thinking about protein and nucleic acid structure; this is now a multi-billion-dollar industry (or more). The same is true of quantum mechanics codes for predicting the properties of substances.
The downside is that much code gets thrown away and is difficult to build on. To try to overcome this we have built up libraries which are shared within the group and with the world as Open Source (though here again very little credit is given). Many researchers reinvent the wheel ("head-down" programming, where colleagues are not consulted, and "hero" programming, where someone tries to do the whole lot themselves).
Too much formality at the start of a project often puts people off, and innovation is lost (no one will spend 2 months writing formal specs and unit tests). Too little, and bad habits are developed and promulgated. Programming courses help, but again it's difficult to get people to take them, especially when you rely on their goodwill. Mentoring is extremely valuable but not always successful.
Are there online resources which can help to persuade people into good software habits?
EDIT: I'm grateful to dmckee (below) for pointing out a similar discussion. It's all good stuff, and I particularly agree that version control is one of the most important things we can offer scientists (we offered this to our colleagues and got very good uptake). I also like the approach of the Software Carpentry course mentioned there.
It's extremely difficult. The environment both you and Stefano Borini describe is very accurate. I think there are three key factors which propagate the situation.
Short-term thinking
Lack of formal training and experience
Continuous turnover of grad students/postdocs to shoulder the brunt of new development
Short-term thinking. There are a few reasons that short-term thinking is the norm, most of them already well explained by Stefano. As well as the awful pressure to publish and the lack of recognition for software creation, I would emphasise the number of short-term contracts. There is simply very little advantage for more junior academics (PhD students and postdocs) to spend any time planning long-term software strategies, since contracts are 2-3 years. In the case of longer-term projects e.g. those based around the simulation code of a permanent member of staff, I have seen some applications of basic software engineering, things like simple version control, standard test cases, etc. However even in these cases, project management is extremely primitive.
Lack of formal training and experience. This is a serious handicap. In astronomy and astrophysics, programming is an essential tool, but understanding of the costs of development, particularly maintenance overheads, is extremely poor. Because scientists are normally smart people, there is a feeling that software engineering practices don't really apply to them, and that they can 'just make it work'. With more experience, most programmers realise that writing code that mostly works isn't the hard part; maintaining and extending it efficiently and safely is. Some scientific code is throwaway, and in these cases the quick and dirty approach is adequate. But all too often, the code will be used and reused for years to come, bringing consequent grief to all involved with it.
Continuous turnover of grad students/postdocs for new development. I think this is the key feature that allows the academic approach to software to continue to survive. If the code is horrendous and takes days to understand and debug, who pays that price? In general, it's not the original author (who has probably moved on). Nor is it the permanent member of staff, who is often only peripherally involved with new development. It is normally the graduate student who is implementing new algorithms, producing novel approaches, trying to extend the code in some way. Sometimes it will be a postdoc, hired specifically to work on adding some feature to an existing code, and contractually obliged to work on this area for some fraction of their time.
This model is hugely inefficient. I know a PhD student in astrophysics who spent over a year trying to implement a relatively basic piece of mathematics, only a few hundred lines of code, in an existing n-body code. Why did it take so long? Because she literally spent weeks trying to understand the existing, horribly written code, and how to add her calculation to it, and months more ineffectively debugging her problems due to the monolithic code structure, coupled with her own lack of experience. Note that there was almost no science involved in this process; just wasting time grappling with code. Who ultimately paid that price? Only her. She was the one who had to put more hours in to try and get enough results to make a PhD. Her supervisor will get another grad student after she's gone - and so the cycle continues.
The point I'm trying to make is that the problem with the software creation process in academia is endemic within the system itself, a function of the resources available and the type of work that is rewarded. The culture is deeply embedded throughout academia. I don't see any easy way of changing that culture through external resources or training. It's the system itself that needs to change, to reward people for writing substantial code, to place increased scrutiny on the correctness of results produced using scientific code, to recognise the importance of training and process in code, and to hold supervisors jointly responsible for wasting the time of the members of their research group.
I'll tell you my experience.
There is no doubt that a lot of software gets created and wasted in academia. The fact is that it's difficult to adapt research software, purposely created for a specific research objective, to a more general environment. Also, the product of academia is scientific papers, not software. The value of software in academia is zero. It is the data you produce with that software that is evaluated, once you write a paper on it (which takes a lot of editorial time).
In most cases, however, research groups have recognized frequent patterns, which can be polished, tested and archived as internal knowledge. This is what I do with my personal toolkit. I grow it according to my research needs, only with those features that are "cross-project". Developing a personal toolkit is almost a requirement, as your scientific needs are most likely unique in some way (otherwise you would not be doing research) and you want to have as few external dependencies as possible (since if something evolves and breaks your stuff, you will not have the time to fix it).
Everything else, however, is too specific for a given project to be crystallized. I therefore tend not to encapsulate something that is clearly a one-time solver. I do, however, go back and improve it if, later on, other projects require the same piece of code.
The short project span and the heat of research (e.g. the publish-or-perish vision so central today) require agile, quick languages - in general, languages that can be grasped quickly. PhDs in genomics and quantum chemistry don't have a formal programming background. In some cases, they don't even like programming. So the language must be quick, easy, clean, flexible, and easy to understand later on. The latter point is crucial, as there's no time to produce documentation, and it's guaranteed that in academia everyone will leave sooner or later; you burn the group experience down to zero every three years or so. Academia is a high-risk industry that periodically fires all its hard-formed executors, keeping only some managers. Having code that is maintainable and can be easily grasped by someone else is therefore essential. Also, never underestimate the power of a Google search to solve your problems. With a widely deployed language you are more likely to find answers to the gotchas and issues you stumble on.
Management is a problem as well. Waterfall is out of the question. There is no time for paperwork programming (requirements, specs, design). Spiral is quite OK, but as little paperwork as possible is clearly recommended. The fact is that anything that does not give you an article is, in academia, wasted time. If you spend one month writing specs, it's a month wasted, and your contract expires in 11 months. Moreover, that fat document counts zero or close to zero for your career (as do many other things: administration and teaching are two examples). Of course, formal Agile methods are also out of the question: most development is done by groups that are far apart and in general have a bunch of other things to do as well. Coding concentration comes in brief bursts during "spare time" between articles, and before or after meetings. The bazaar is the most likely model, but the bazaar carries a lot of issues as well.
So, to answer your question, the best strategy is "slow accumulation" of known-good software, with development in small bursts using a quick and agile method and language. Good coding practices need to be taught during lectures, the way good laboratory practices are taught during practical courses (e.g. never pour water into sulphuric acid, always the other way around).
The hardest part is the transition between "this is just for a paper" and "we're really going to use this."
If you know that the code will only be for a paper, fine, take short cuts. Hardcode everything you can. Don't waste time on extensive validation if the programmer is the only one who will ever run the code. Etc. The problem is when someone says "Great! Now let's use this for real" or "Now let's use it for this entirely different scenario than what it was developed and tested for."
A related challenge is having to explain why the software isn't ready for prime time even though it obviously works, i.e. it's prototype quality rather than production quality. "What do you mean, you need to rewrite it?"
I would recommend that you/they read "Clean Code":
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882/ref=sr_1_1?ie=UTF8&s=books&qid=1251633753&sr=8-1
The basic idea of this book is that if you do not keep the code "clean", eventually the mess will stop you from making any progress.
The kind of big science I do (particle physics) has a small number of large, long-running projects (ROOT and Geant4, for instance). These are developed mostly by actual programming professionals, using processes that would be recognized by anyone else in the industry.
Then, each collaboration has a number of project-wide programs which are developed collaboratively under the direction of a small number of senior programming scientists. These use at least the basic tools (always version control, often some kind of bug tracking or automated builds).
Finally, almost every scientist works on their own programs. Use of process on these programs is very spotty, and they often suffer from all the ills that others have discussed (short lifetimes, poor coding skills, no review, lots of serial maintainers, Not Invented Here syndrome, etc.). The only advantage available here compared to small-group science is that these scientists work with the tools I talked about above, so there is something you can point to and say "That is what you want to achieve."
I don't really have that much to add to what has already been said. It's a difficult balance to strike because our priorities are different: academia is all about discovering new things, while software engineering is more about getting things done according to specifications.
The most important thing I can think of is to try to extricate yourself from the culture of in-house development that goes on in academia and to maintain a disciplined approach, difficult as that may be in many cases owing to time constraints, lack of experience, and so on. The control-freakery of that in-house culture sucks away individual responsibility and decision-making and leaves them in the hands of a few who do not necessarily know best.
Get a good book on software development - Code Complete, already mentioned, is excellent - as well as any respected book on algorithms and data structures. Read up on how you will need to manage your data, e.g. do you need fast lookup, hash tables, binary trees? Don't reinvent the wheel: use libraries such as the STL, otherwise you are likely to be wasting time. There is a vast amount on the web, including this very fine blog.
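To make the fast-lookup point concrete: the difference between scanning a list and using the library's hash table is one line of code, not a hand-rolled data structure. A minimal Python sketch with made-up data (the same idea applies to std::map or std::unordered_map in the STL):

    measurements = [("sample_a", 1.23), ("sample_b", 4.56)]  # hypothetical data

    # O(n) linear scan: fine for a handful of items, painful for millions.
    value = next(v for k, v in measurements if k == "sample_b")

    # O(1) average-case lookup using the built-in hash table.
    by_name = dict(measurements)
    value = by_name["sample_b"]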
Many academics, besides sometimes being prima-donna-ish and precious about any approach seen as businesslike, tend to be quite vague in their objectives, to put it mildly. For this reason alone it is vital to build up your own software arsenal of helper functions and recipes, eventually (hopefully) ending up with a kind of flexible experimental framework that lets you try out any combination of things without being too restricted to any particular problem area. Strongly resist the temptation to just dive into the problem at hand.
Closed. This question is opinion-based. It is not currently accepting answers.
I want to avoid situations where my developers do not share common knowledge (solutions to problems they encountered, cool tips, common mistakes, shortcuts for achieving a particular goal, configuration issues, partial requirements, etc.) with each other. I'm talking about situations where the lack of communication is accidental (a result of misunderstanding or improper management); I'm not thinking of situations where developers deliberately keep knowledge to themselves.
I believe that the following techniques are extremely useful for improving the information flow within a development team:
XP pair programming - due to the knowledge exchange within the pair (and the regular mixing of pairs).
stand-up meetings - due to the opportunity to tell the others what you're working on and what problems you have encountered.
trainings/presentations/coaching prepared by the lead developers for the rest of the team/department.
"web 2.0 tools" - techie blogs for the company/department, a dedicated Twitter account for the team leader, wikis and the like.
Any further ideas? What techniques do you use (or did you) in your company? How would you encourage developers to share the knowledge between themselves?
Trust.
You are allowed to 'seem stupid', but please ask if you don't know or don't fully understand what I'm saying. And please tell me if I'm wrong (maybe I didn't realize it because I'm equally stupid).
I worked at one company where every Friday we had a lunch meeting for developers. Management would provide food while developers had to share their knowledge: present some tool or technique they had learned recently, or give a demo of a project they were working on, etc.
It wasn't restricted to the technologies being used by the team at the time; developers were encouraged to learn new technologies and give demos to the team.
And at my current job we have monthly IT group meeting, where sometimes developers from different teams demo off the projects they've been working on.
An internal Twitter-esque utility. Maybe a wiki, if you can get it to work; I personally find it a little too much. But Twitter is different: "just added an extension method to escape a LIKE clause in a RowFilter" and stuff like that.
Some people may find it a little overbearing, but it gives you a common location for utilities, so you know where to look and string.CountOccurrences isn't scattered throughout the codebase.
I'll add a few more
Hire the right people - This is essential if you want to create a great dynamic (asocial people require a lot more effort)
Pre-mortems and post-mortems. We use the wiki for this: create a page for each of your projects, split it into sections for recurring themes (both good and bad). At the end of each milestone, have the team meet to do a post-mortem. At the end of the project (or after a fixed length of time), have the project coordinator compile this into something easy to read for posterity (and put it on your wiki).
Daily stand-ups are a must! You already said it, but I find it so helpful!
If you have multiple teams in the company, organize conferences about their greatest achievements. If possible, do it on a regular basis, even across departments; you would be surprised how interested artists can be in programmers' work.
Lunch is a good time to share. In our company we have the president's breakfast, project leads' lunch, and end-of-project supper. I love them all; mix and match for greater results.
Offsite meetings with the whole company are great; we do one at least once a year (in the morning we present what's coming up in the future; the afternoon is activities to learn about the projects).
Wikis are great, but beware of information that can become false over time (a recurring problem with any written information).
A few more things to my mind:
Patterns & practices meetings - These don't have to be every week, but there should be some time devoted to them, where the team can discuss various outstanding questions and reach a consensus on things that may save a lot of people headaches.
Culture factor - Does the workplace provide enough socializing to help the team gel, or could some team-building exercises, e.g. an obstacle course or cooking together, be useful in establishing some dynamics? Is there humility among developers, so that big egos don't become a problem? Another factor here is to think about how you'd answer this: would you go to a local pub and have a drink with your fellow teammates? If yes, then you have some good points here; if not, then there may be some investigating to do.
Retrospective follow-up - How are ideas presented during retrospectives considered and implemented? How are meetings handled in general?
Demos within the team - If some story got finished and involved some big code points, then perhaps there should be a little demonstration for the team, so that others can see what was done and the knowledge gets spread around. This can dovetail with my first point as something that helps to further communication.
I'm a big proponent of working in pairs. It is a good way to transfer knowledge and keep communication lines open. Try mixing up the pairs for each project as well.
I've tried many approaches, and am a big fan of working in pairs on projects, as well as doing regular discussions or meetings with the team.
However, I've also found that the single best thing I can do is foster a culture of constant communication between the developers. I try to have all of my developers communicate with each other as they work - not even necessarily waiting until a weekly or monthly meeting.
For me, this is a little trickier, as most of my developers are not in the same location, so we have a single XMPP chat room set up, and all of us are always logged in when we're working on the project. Some of the developers (including myself) will log in during our off hours as well.
I do the same with the people in my office - we tend to be a fairly quiet bunch, but I'm very open to having people interrupt each other with questions, or grab a chair and sit down to brainstorm at any time.
Part of why this works, though, is I try not to restrict the communication to the work at hand, or any specific project. My feeling is that people are going to talk about other, non-work related things, whether or not I foster that. I'd rather have the "water cooler" talk in an official channel, though, than outside.
This makes everybody feel more at ease to ask the questions that "seem obvious". Also, people ask questions continually, since they're right there, and used to talking to everybody. It's easy to ignore if needed, but also much easier to just throw out a general question and see if anybody has ideas without feeling like a pain, etc.
My experience is that the time lost due to interruption is much smaller than the time saved due to having a group that is always eager to help solve a problem at hand.
If you have a small enough team, making good use of SVN commit comments, and exploiting them with a tool that generates an RSS feed (like Trac, for instance), can be an easy and efficient way to promote communication.
There are several requirements for this to work, all of which are quite easy to attain:
- commit frequently (which is good in itself, as it allows everybody to benefit from each programmer's local changes and to identify problems early);
- use verbose comments (also good in itself, as it makes it easier to trace what was changed in case anything breaks);
- ensure everybody actually reads the feed (better yet, stays subscribed to it through an RSS reader).
Of course, there is no way to "reply" to such comments, but if someone really needs to reply, it's probably between that person and the committer, so mail is usually enough.
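As a concrete sketch of the consuming side: polling such a feed and printing recent commits takes only a few lines. This assumes a hypothetical Trac timeline URL; any RSS 2.0 feed of commit messages works the same way:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical Trac timeline feed, restricted to changesets.
    FEED_URL = "http://trac.example.com/timeline?changeset=on&format=rss"

    with urllib.request.urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    # Standard RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
    for item in tree.getroot().iter("item"):
        date = item.findtext("pubDate", default="")
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        print(f"{date}  {title}\n    {link}")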
Another useful tool is to ask each developer to, say once a week, write a list of 10 or so bullet-point recommendations for fellow coders on a topic he or she is really familiar with.
Time.
Official
Getting out of your dusty office to clear your mind, really taking the time to go to a lecture or training, it all helps to spread knowledge.
It's also easy to budget: N developers go to a meeting for T hours.
Unofficial
"On the job" training... The things you need for your specific job can only be taught by someone who knows the job.
In the current climate, under the current pressure (must ship now), no one takes the time to fully explain something. Only when people are relaxed are they ready for information sharing, and people are relaxed when they have enough time.
Apart from that, you need to bump into some specific linker error before you really start thinking about it. Without the time to think, ask, and read, you won't be able to acquire the knowledge. You can't postpone it to an official linker training.
This is way harder to budget: developer Mary asked developer Sophie about dynamic linkage for an hour and a half, and the day after she went back with some questions. Experienced developers will spend more time distributing knowledge, while younger ones will need more time learning.
no walls - Have all of your developers in one large, non-walled room, where everyone can see and talk with each other.
common goals - Ensure your team has a good understanding of its goals, INCLUDING the goal of self-improvement.
rewarding - Rewarding communication, even if it is nothing more than communication, reinforces what you are looking to accomplish.
Socialization and common goals always encourage the exchange of information.
Closed. This question is opinion-based. It is not currently accepting answers.
On what criteria do they choose their projects, and what factors go into the decision?
Return on investment, if they want to stay in business.
Return on investment is of course the bottom line, but a number of factors feed into getting there:
Their own expertise: Do we have people with the skills needed to do this? Can we hire some?
Available resources: programmers, managers, hardware, time, financial resources.
PR: Even if we don't get paid that much, will this project get us more business?
PR: The pay is great, but do we really want to be associated with this client?
Their mission/goals: What fields/niches do we want to compete in? Do we want to expand?
Past experiences: We did a project like this once; it was horrible. Let's not do that again.
Past experiences: It was fun last time, AND we can reuse half the code! Let's do it!
Usually management uses more sophisticated decision matrices to reach a decision, but these are more or less the factors they put in.
I am sure someone can provide a more specific/scientific answer.
Good question. The straightforward answer may seem to be return on investment (ROI). However, ROI is criticised for three reasons (on the first, see the short sketch after this list):
Short-termism: ROI is seldom calculated beyond 5-7 years (due to the increasing discount applied to any cash flows produced further in the future), yet some projects really worth doing realise their full benefits much further out.
It's hard or impossible to put a monetary value on some things. The often-cited example is human life; another is moral principles. But the thing most frequently encountered in the software world that is very hard to price is the opportunities that will never emerge unless the project goes live. It's hard to put a value on emerging opportunities, because we don't know what they are until they actually emerge. And I don't mean opportunities that will simply not "open up", but ones that specifically emerge.
ROI doesn't take wider strategy into account. The importance of strategy in the software world should not be underestimated, and the strategy should take into account the specifics of providing software products or services. Geoffrey Moore's "Crossing the Chasm" is a brilliant book that I recommend; it is very pertinent to the software world.
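To see why the 5-7 year horizon dominates, look at the present-value arithmetic underneath ROI: a cash flow t years out is divided by (1 + r)^t, so distant benefits shrink geometrically. A small illustration (the 10% rate and the amounts are mine, purely to show the shape):

    def present_value(cash_flow: float, rate: float, years: int) -> float:
        # Discount a future cash flow back to today's money.
        return cash_flow / (1 + rate) ** years

    for years in (1, 5, 7, 10, 15):
        pv = present_value(100_000, 0.10, years)
        print(f"$100,000 in year {years:2d} is worth ${pv:,.0f} today")
    # year 1: $90,909; year 7: $51,316; year 15: $23,939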
Joel's recent instalment "Fruity treats, customization, and supersonics: FogBugz 7 is here" has a great sample strategy document and the reasoning behind it. It seems that Fog Creek plans to leave the bowling alley and enter the tornado (according to Geoffrey Moore's classification) with FogBugz 7.0, hence the strategy of removing barriers that prevent people from switching to FogBugz, instead of spending time introducing more vertical features.
Other tools that can be used for selecting projects are SWOT analysis, Pareto analysis (i.e. choosing a project to address 20% of causes that are responsible for 80% of problems), PESTLE, Cost-Benefit analysis (similar to ROI, including the critique).
However, it seems that the best starting point is a sane strategy that states what the company plans to do and not to do within a finite period of time (often the next year or two; in high tech, market conditions are hard to predict beyond that horizon), giving simple guidelines for choosing priorities and a clear direction for joint efforts.
I also recommend reading "Almost Perfect", a fabulous book by Pete Peterson (former CEO of the maker of WordPerfect) that is available online. The book tells the real-life story of the different strategies SSI Inc followed, some planned and stated and some ad hoc, and the way they were used to select what to work on.
ROI is only one measure. There are many other factors:
Risk management - for example, improving the process may not show any direct return on investment, but by adding e.g. unit tests the quality of the software can be improved and risk of a production bug reduced.
Compliance - there may be requirements by industry or government that need to be followed. Directly this may not show a return on investment because they may never be audited, but the downside to being non-compliant is huge (being shut down).
Manageability - providing metrics on bugs, project schedules etc. may not show a direct return on investment but it may allow them to better predict and manage their projects.
Security - this may be considered as a part of risk management, but it is a broad enough area to merit its own category. Making legacy code secure can cost a large amount of money and not show any immediate return, but there are obvious reasons why this is worthwhile.