This book was written in the era of time-sharing systems and procedural programming, with roughly 30 fewer years of accumulated software engineering experience to draw on. Given improvements such as existing libraries, higher-level languages, IDEs, and the amount of documentation and examples available on the internet, how much of the book still holds true?
While I can believe that adding new people to a project may initially slow it down, I would think that things such as unit testing, separation of concerns, and other forms of automation and design improvement would allow new members of a team to become productive faster than the book assumes, provided the project had good design documentation and processes in place.
I don't have experience on large projects or with large teams, so I am interested to hear what those of you who do have that experience think.
Edit:
I was wondering if new communication tools such as wikis, instant messaging, and the internet in general have decreased the time spent communicating. Based on everyone's answers, I would say that any increase in communication efficiency has been offset by increased complexity.
It is still as true today as the day it was written. This is because it is fundamentally about communication between people working on the same project, and none of the advances of the past 30 years have substantially changed that.
Of course, we have learned a lot in those 30 years, but all improvements in our tools and understanding have been incremental, in accordance with Brooks' "no silver bullet" prediction.
Isn't this kind of like asking if Sun Tzu's Art of War is still applicable to warfare since we have modern equipment?
The book still has things to tell us, and I for one have experienced the problems in communication that increased team sizes bring. You should be aware that unit tests, separation of concerns etc. are not new concepts.
However, some things have not stood the test of time. I don't believe that writing ASCII flow charts in your code is a good idea, and the "surgical team" approach it suggests has been tried by several people (Charles Simonyi at Microsoft, most famously) and found not to work too well.
The idea isn't that "large teams don't work", it's that "throwing people/money at the problem isn't the answer". Things like unit testing, separation of concerns, etc. are doing other things rather than just throwing people at the problem. These other things allow you to carefully add more people in the right place to speed things up. If anything, the points you make support the ideas of the book.
Both of the famous Brooks writings, "No Silver Bullet" and "The Mythical Man-Month", are explanations of fundamental limitations, in programming languages and project management respectively.
While it's true that some of the chapters past the halfway point of TMMM deal too heavily with the technology of the time, the remaining chapters are still as relevant today as they were when written.
In TMMM, Brooks follows a pattern of "outline the problem", "show some false starts", and "propose my own solution". Some commentators above have pointed out that his solutions may be considered outdated at this point, but his concise description of the problems inherent in large projects makes the book worth reading.
One theme he keeps coming back to is communication overhead as a limiting factor for large teams. As a thought experiment, consider the effect of the Internet as a communications medium for programmers, and the catalyst it has been for large open source projects.
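To put rough numbers on that overhead, here is a minimal sketch (my own illustration for this answer, though the n(n-1)/2 intercommunication formula is the one Brooks gives):

    # Brooks: if every pair of people must coordinate, the number of
    # communication channels grows as n(n-1)/2 -- quadratic in team size.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for team in (3, 5, 10, 20, 50):
        print(f"{team:2d} people -> {channels(team):4d} channels")
    # 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225: headcount grows
    # linearly, coordination overhead quadratically.

Whatever the medium - hallway, mailing list, or wiki - the number of conversations that must happen grows much faster than the headcount.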
Personally, I would read the book just for "The Joys of the Craft" section. I've never read anything that so elegantly describes what programming at its best feels like.
(If you're curious, it's on page 7, and viewable in the amazon.com "Look Inside!" feature)
I certainly think things like "No Silver Bullet" are just as applicable today as they were decades ago, especially as we see more and more young people come into the industry thinking that X is the latest and greatest killer language/technology and that all the other technologies will die because of it.
Granted, the references to Ada or sharing computers are antiquated, but the concept of accidental and essential difficulties, buy vs. build, how code is complex by definition because we don't repeat parts, and all the other theoretical topics are still completely accurate and relevant.
The other argument for why TMMM is relevant is that it's not really about software itself but about how programmers get things done. In this way, it's hard for it to become obsolete.
The two that stick out in my mind: "version 2" still applies, and so does "adding more people doesn't necessarily make it faster".
Spolsky discusses "version 2" in his own way. I do not recall if he specifically links it to TMMM, but it is very similar in concept.
Communication has become a lot more efficient than when TMMM was written; however, I think it's all proportional. It takes a lot more to make software production-ready than it did back then.
Someone said that everything in computer science was discovered in the 1960s and everything since then has been derivative.
Read TMMM as a book outlining a problem, perhaps THE problem, in software engineering: it's not the technology, it's the people! All the improvements you mention spring from that core realization. They are all in place to solve the problems Brooks laid out. This is the book that I'm sure Kent Beck, Ward Cunningham, Alistair Cockburn, and Martin Fowler all read, took to heart, and then began to craft their silver bullets against.
I consider this to be one of the "must read" books for anyone who wants to understand the software development process.
The demand for a development workforce has been growing rapidly for the last 40 years, and it will not stop. Since the proportion of clever people (vide Joel's "smart and gets things done") in the population remains roughly the same, educating more and more developers each year means that the average smartness of a developer is getting lower.
40 years ago, it was demigods who became developers; 20 years ago, it was a job for clever people; now, when I look at young CS students at my alma mater, it seems that they are taking everyone who knows what a computer is.
This does not mean a disaster is coming - the western world keeps importing smart people (or outsourcing work) from emerging markets and third-world countries, and new development tools make it easier to develop good applications. Those factors seem to neutralize each other, keeping TMMM eternally true.
The Mythical Man-Month is a very dated read, but the core truths still apply. Sure, Brooks discusses the need for a secretary, which is clearly not true today, and his concept of a surgical team doesn't work well, but most of the book is still accurate. His insight that communication requirements increase along with the size of the team is still true. His observation that adding people to a late project makes it later has been borne out by a lot of projects. Unit tests help a little, but they don't stop one from misunderstanding the code or asking a lot of questions. No Silver Bullet also continues to stand the test of time.
All of it. The simple fact is that software projects are nontrivial; we build our own domain knowledge, really, directly into our solutions. Domain knowledge transfer is costly, both for the transferrer and for the transferee; this has not changed. And I, for one, believe it never will, no matter what the practices and tools used. Things may get marginally better, but the simple fact is that teaching and learning are both expensive and difficult things, and there's just no way to avoid them.
The social factors are still present, because humans are still essentially the same beasts we were 50 years ago.
The technical examples are almost completely obsolete, and only make sense when you think about the 0.034 MIPS System/360 of 1964. When you had only 8 KB of memory, suggesting that the user should be responsible for handling leap years instead of wasting 26 bytes of system memory (as Brooks did) made sense, but today it seems downright silly. I don't know of any system that small today -- your telephone is thousands of times more powerful than the most powerful OS/360 system. Today we know a lot more about usability and human-computer interaction, and making the user responsible for that category of thing is just crazy.
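For a sense of scale, this is the entire calculation Brooks' team decided users should do by hand rather than spend those bytes on (a minimal sketch in Python; the Gregorian leap-year rule is standard, though I'm only assuming it matches the calculation in question):

    def is_leap_year(year: int) -> bool:
        # Full Gregorian rule: every 4th year is a leap year, except
        # century years, unless the century is divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # The few bytes of logic OS/360 couldn't spare out of 8 KB:
    assert is_leap_year(1964) and is_leap_year(2000)
    assert not is_leap_year(1900) and not is_leap_year(2023)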
One programmer can now write more code/build more software than one programmer could back then, but adding the second developer is not going to produce twice as much.
If/when I get on a project with good design documentation and processes in place, I'll let you know if that improves anything.
If you think of a large project that is late, it is most likely in crisis/panic mode. As with most things in this mode, the best answer is a half-decent plan; simply throwing more resources at a crisis doesn't solve anything - it just wastes resources and compounds the problem.
There is no substitute for scheduling (pronounced shed-uling), planning and decent management.
As with most of these "one liners" or "golden rules", consider them more guidelines (with context) than set in stone.
I've inherited a lot of web projects that experienced high developer turnover rates. Sometimes these projects are a horrible patchwork of band-aid solutions. Other times they can be somewhat maintainable mosaics of half-done features, each built in a different architectural style. Every time I inherit these projects, I wish the previous developers could explain to me why things got so bad.
What puzzles me is the reaction of the owners (either a manager, a middle-man company, or a client). They seem to think, "Well, if you leave, I'll find another developer, because you're expendable." Or they think, "Oh, it costs that much money to refactor the system? I know another developer who can do it at half the price. I'll hire him if I can't afford you." I'm guessing that the high developer turnover rate is related to the owner's mentality of "My ideas are always great ideas, and if you don't agree, I'll find another (possibly cheaper) developer who agrees with me and does what I want." For the owners, the approach seems to work, because their business is thriving. Unfortunately, it's no fun for developers: they go AWOL after 3-4 months of working with poor code, strict timelines, and insufficient client feedback.
So my question is the following:
Are the following symptoms of a project really such a bad thing for business?
high developer turnover rate
poorly built technology - often a patchwork of different and inappropriately used architectural styles
owners without a clear roadmap for their web project, and they request features on a whim
I've seen numerous businesses prosper with the symptoms above. So as a programmer, even though my instincts tell me the above points are terrible, I need to take a step back and ask, "are things really that bad in the grand scheme of things?" If not, I will re-evaluate my approach to these projects, i.e. do I build long-term solutions or band-aid solutions?
At the risk of this post being closed as non-programming related, I'd like to argue that I think it is programming related, because answers to this question will influence the way a developer approaches a project. He will have a better feel for how far in advance he should plan his development (i.e. build a short-term or long-term solution) knowing he may quit at any moment.
All three symptoms are bad. They really are a bad thing for business. That being said:
Software development exists to make tools. That's it. It's not an end, in and of itself - you're a tool maker.
There are very successful businesses that operate using poor tools. They may not be run as well as they should be, but good results can, and often do, come from bad tools. Also remember, though, that eliminating your three symptoms will likely make the company even more effective, especially in the long term.
High dev turnover is a symptom, not a cause. The cause is bad management. If those businesses prosper, it's usually in the short term and usually precedes a buyout, a merger, or an outright failure. I've seen it happen over and over.
If you can afford to - run. There are bad companies out there, but there are good ones too - at least better than the mess you describe.
All three of those things are bad; let me focus on turnover.
I'm seeing it happen right now. Management/the company are being cheap, so they don't care much about the team, technology, or process - just the bottom line. So in turn (eventually) team members don't care about the project, just THEIR bottom line. After several months, they decide it's not worth the stress and move on. We are a small team of 6 developers; this year 3 people want out and it's only July. 2 people came in, and one more is coming. It seems all we're doing is transition and project handover. The team does not mature and is ineffective. Our customer senses this, and instead of giving the team more projects (more money for the company), they limit it to certain apps. I wonder when management will realize that being cheap is costly!
If I may take a Devil's Advocate view on this for a moment:
Some people like a challenge. Achieving extremely difficult things is very exciting for some people, and there are some developers who enjoy finding those uber-hard problems and working on them. Having something difficult to do appeals to some people.
The turnover means that each time someone is starting from scratch, rather than retaining all the ideas and thoughts that the previous developer had in building out the software, whatever it was intended to do. Then again, sometimes multiple heads can make a good thing. After all, how many people developed Windows 7? ;)
The poorly built point is where someone may think, "Oh, I can shine here by fixing some of this stuff," and at times it can work for a while. Ka-ching!
The lack of a roadmap and almost advocating the "Cowboy coding" style may appeal to those that want great autonomy and just move to their own beat. After all, who needs methodologies and best practices when one has supernatural powers to use to make this awesome stuff that will take no time at all?
There is the question of what the root cause of the developer turnover rate is. Is it just that the project is killing developers, or is the pay so bad that almost anywhere else would be better, or something else? Just something to ponder here, as there can be many a way to get rid of a developer, both literally and figuratively.
To be serious about this for a moment, there are some people who do enjoy high-pressure situations and others who want to avoid them at all costs. Most people are somewhere between the two extremes. Where do you think you fall on that scale, though?
I'll address each of your three points in turn. High turnover in any industry is considered bad for business and a management problem. However, I've read several books about corporate politics and cultures and the effect those have on the corporate bottom line. One book I read studied several major corporations over a 20-year span. It found that poisonous cultures grow slowly and tend to be "lagging indicators" of bottom-line performance problems. It also found that when some of the companies were able to hire new CEOs who ultimately "turned the ship around", it took 10-15 YEARS to stop the bleeding. So in a VERY big-picture view, yes, turnover is poisonous, although it truly is a symptom of the larger problem. A symptom that should not be ignored. (Even though it usually is ignored for long periods of time. Ever notice that it takes HR a very long time to realize that a department's turnover might be tied to a bad manager?)
Poorly built technical infrastructure - or products that are sold to customers - is obviously bad for the bottom line. I think that only non-technical people fail to understand this. Of course, there is a range between "not optimal but works" and "barely works as long as you restore the database once a week, but it gets us by". I think the reason this happens is that the cost leg of the "holy trinity" is always chosen in favor of quality. In my experience this is guaranteed to be a hard and fast rule: if management has to choose between cost, quality, and schedule, quality is always the first tossed to the wayside.
The problem of owners without a clear roadmap and feature creep are a symptom of lack of business discipline. Feature creep costs money. And when it's bad enough, it can actually prevent anything from being completed.
The interesting thing to me about your question is that you say that they are thriving as a company, so it makes me wonder if the technology is as important to them. Maybe the problem is that they don't see the value in better technology (and they might be right in their case, I'm not sure what kind of business they are).
In general, a very high turnover rate for employees isn't good in any company. When it comes to software, high developer turnover is bad because of all the tutoring that has to be done for each new hire, and the "big picture" knowledge that walks out the door. So if software is important to the business, a high turnover rate is bad for business.
Only doing requested features without a roadmap is a one way path to bloatware. If you have no clear strategy, goal or purpose for a product, your only source for what to do is customer requests, which might be bad. This is so because the customers might actually not know what they want, thereby requesting features they won't use.
From one perspective, the attitude(s) you're quoting are understandable. Software development isn't cheap, and most people/businesses are trying to save money everywhere they can. However I think they're usually shooting themselves in the foot with this sort of behavior.
One suggestion for dealing with this is to get a copy of The Mythical Man Month and read the section on why adding more programmers to a late project will only make it later (it's the title - and second (in my copy) - essay). Many of the same ideas apply to replacing a developer ... except that if you're working solo, it's likely you may as well start over, as figuring out what the previous person did may take longer than starting from scratch. After you've read the essay, give a copy to anyone who's taking the attitude you cite and ask them to read it. No guarantee that it will help, but it's worth a try.
I realize that the question is likely to get a lot of "it depends", but I am curious anyway. When you hire somebody new (but experienced) to the team, and they don't have expertise in the technology you are using, but know something similar, how much time do you budget for them to "get online"?
I am talking about something fairly substantial, like a language, or a framework / product that has a lot of ways of doing things. Obviously, many libraries take very little time to start using.
In my own experience (10 years of experience, including a fair amount of consulting, so learning new technologies is par for the course), it takes me about three to six months of experience to become proficient at a new technology, and about a year to feel like I am approaching expert level where I know all the basics and medium-difficulty issues, along with a few areas very well.
What do you do in your projects? How do you budget the time to account for learning?
It doesn't only depend on the individual involved -- it crucially depends on the specific technology as well as the individual's background; certain technologies, especially languages, are just harder and slower to get into. I've seen world-class Java gurus with zero previous exposure to C++ take many months, say on the order of six or so, to be fully productive in C++; vice versa (a world-class C++ guru with zero previous exposure to Java) I've seen take about 2-3 months; again, for extremely experienced and skilled programmers with no previous exposure to dynamic languages, being fully productive in Python can be expected to take 3-4 weeks. In each case I'm talking about 100% full-time involvement in the relevant technology, by a programmer in the world's top one percent in terms of skill and experience, within a team having several other programmers of that caliber who are also gurus in the specific language in use.
Factors that can shorten the time are previous exposure to "similar" languages/technologies, e.g. a solid background in C makes C++ slightly faster to learn, solid background in C# helps with Java, solid background in Ruby or Perl helps with Python. Factors that can lengthen the time include lack of suitably experienced teammates, not being 100%-immersed in the "new thing", and psychological resistance (not really wanting to do it with all one's heart!-).
I've focused on programming languages for my examples, but some technologies can be even harder, i.e., take longer to master -- if you've never written embedded hard-real-time programs (no dynamic allocation of memory allowed, proofs of upper bounds on response time required for all functions), even six months might not suffice; some application areas require mastery of application domains that, all on their own, can take even longer (if, to understand at all what's going on and therefore be fully productive, you need the equivalent of a BSc in Psychology, or deep knowledge of the law, or a CPA's qualifications, etc., well, each of those takes years on its own!).
I don't think the language as such is the issue, rather the programming paradigm it encompasses.
e.g. earlier this year I tried C#, coming from a Java perspective. That was all very straightforward. However, I'm now trying Scala. Because of the functional aspect, I expect to be learning and honing my skills for a lot longer (you can write Scala in an imperative fashion, but you don't leverage its strengths doing that).
I suspect the same would apply when (say) migrating from a relational database to an OO database, vs. a MS-SQL/Oracle migration.
It does depend, mainly on how closely the language resembles a language they already know, as well as individual abilities at picking up new things. Moving between similar languages like C++, Java, and C# is very easy. Similarly, moving from (say) Win32 to MFC to .net is going to be easier than from MFC to MacOS.
Moving from C to C++ is likely to take longer, as the programmer has to learn OO methodologies. Moving from C++ to Perl or ML could take a lot longer!
However, you usually don't need to know much to get started. Moving from C++ to C# can be done in a few hours reading (on the main differences) and then you can start writing (or modifying existing) code. That's because (a) you already know how to do OO programming, and (b) 95% of the syntax is identical.
But the main thing it depends on is your definition of "proficient". With similar languages, you will be able to write good code within a few days (an algorithm is usually fairly language-independent), but it usually takes months or years to become truly "proficient" in a language or large library.
So I'd say as a rule of thumb, "up to (a reasonable) speed" in a few weeks, but you might see silly "mistakes" or inefficiencies in their code for months/years until they learn all the little tricks of the language.
In the case of people learning OO, it usually seems to take a few days to get the basic concepts, and then at about the two-year mark a moment of epiphany occurs where the programmer suddenly realises that they truly "get" it. (I guess this is when your brain starts thinking fluently in OO rather than trying to think procedurally and then translate that into an OO approach.)
In our environment (US health care revenue cycle) it is more than just learning and becoming proficient in the language or technology stack we use to deliver our solutions to our customers. The developer also has to understand the problem domain. We work with entities that often don't document the behaviors of their systems well enough for external entities (us) to communicate with them and get the data that our customers want. Our developers are forced to think beyond the specs to build a functioning system.
There is also the inevitable "It doesn't work; fix it" problem report from the customer support staff. Frequently the problem isn't a defect in our software; it is an issue with other entities with which our software communicates. Our developers have to be able to identify (and sometimes prove) that it isn't our software so that our business analyst-types can go to that other entity and explain the issue in a way that will get them to resolve the problem.
You were expecting this answer, but it all depends on the person/programmer. I have been in a situation where two equally skilled programmers had to pick up something new; one got it right away, while the other took some time. Previous exposure to other technologies is also a factor.
Personally, with regard to something new, I budget my time to learn everything about it every chance I get. It would take about 6 months to be fully comfortable.
Hope this helps.
"I am talking about something fairly substantial, like a language, or a framework / product that has a lot of ways of doing things. Obviously, many libraries take very little time to start using."
"When you hire somebody new (but experienced) to the team, and they don't have expertise in the technology you are using, but know something similar, how much time do you budget for them to 'get online'?"
Twenty-three work days, six hours, forty-three minutes, and seventeen point nine seconds.
"What do you do in your projects? How do you budget the time to account for learning?"
I think these questions are better!
Try to find an easy project in the new technology, and have them do that. If possible, have the person start by fixing bugs, then adding small features.
Learning is incremental. One can continue learning details of, say, C++ syntax throughout one's life. When one is an "expert" in a topic, it just means that the gains from learning more in that topic are growing smaller.
+1 for it depends.
It depends on such things as
the attitude and capabilities of the person learning it
whether the problem area / programming paradigm is well understood by that person
the similarity of the new technology / language to other technologies he/she does know
the consistency of the new technology / language in its interface (API, grammar, etc...)
what counts as proficient (knowing just the language, or also the basic library, or also runtime behaviour (interactions with the underlying technology))
Having said that, in my experience a smart person learning a new language/technology will quickly be more productive than other people with more experience in that language/technology.
See Peter Norvig's Teach Yourself Programming in Ten Years for the related question of how long does it take to become proficient in programming.
It so completely depends on whether you already know languages that are similar to the new one, and know something about the problem domain the new language is suited for. I'd say don't expect to be reasonably proficient in less than 3-6 months, but again, it depends.
To take one example, I implemented a PHP/MySQL web application a couple of years ago (total effort was about 6 months). It was my first reasonably large web application, and my first PHP ever. I'd used relational databases before, but this was also my first exposure to MySQL. MySQL came very quickly, as expected, since it's really only a dialect of a language I knew well. What surprised me was that PHP also came quickly. I realized that not only did it borrow ideas from Perl and C/C++, but the whole paradigm of coding with integrated SQL statements strongly drew on some experience I had in the 90's with, of all things, Informix 4GL.
At the other end of the spectrum, I've never really learned a functional language, so I'm trying to pick up Scala. This is going to take substantially longer, and there'll be a long period where my Scala will feel like Java in disguise, and not be that functional.
So ... it depends! ;-)
I agree that it depends.
You also run the risk that if the person knows one technology/paradigm, they will code in the new language/technology using the old practices/paradigms.
For example, I picked up Python really fast (I'm a Java/C++ guy), but it took a long time until I stopped writing Java-style code in Python and started thinking functionally.
To get really good, I think there's no replacement for experience. For instance, I'm sure I could easily pick up J2EE, but the experience to build up the best enterprise systems is not something you can pick up that fast.
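To make that shift concrete, here is a small sketch of the kind of thing I mean (an illustrative example, not code from an actual project):

    # Java-style Python: index loop plus a mutable accumulator.
    def squares_of_evens_java_style(numbers):
        result = []
        for i in range(len(numbers)):
            if numbers[i] % 2 == 0:
                result.append(numbers[i] ** 2)
        return result

    # Idiomatic Python: a comprehension states the same idea directly.
    def squares_of_evens(numbers):
        return [n ** 2 for n in numbers if n % 2 == 0]

    assert squares_of_evens_java_style([1, 2, 3, 4]) == [4, 16]
    assert squares_of_evens([1, 2, 3, 4]) == [4, 16]

Both versions work, which is exactly why the habit is hard to notice in your own code.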
What are useful strategies to adopt when you or the project does not have a clear idea of what the final (if any) product is going to be?
Let us take "research" to mean an exploration into an area where many things are not known or implemented and where a formal set of deliverables cannot be specified at the start of the project. This is common in STEM (science (physics, chemistry, biology, materials, etc.), technology, engineering, medicine) and many areas of informatics and computer science. Software is created either as an end in itself (e.g. a new algorithm), as a means of managing data (often experimental), or for simulation (e.g. of materials, reactions, etc.). It is usually created by small groups or individuals. (I omit large science, such as telescopes and hadron colliders, where much emphasis is put on software engineering.)
Research software is characterised by (at least):
unknown outcome
unknown timescale
little formal project management
limited budgets (in academia at least)
unpredictability of third-party tools and libraries
changes in the outside world during the project (e.g. new discoveries, which can be positive - saving effort - or negative - getting scooped)
Projects can be anything from days ("see if this is a worthwhile direction to go") to years ("this is my PhD topic") or longer. Frequently the people are not hired as software people but find they need to write code to get the research done or get infected by writing software. There is generally little credit for good software engineering - the "product" is a conference or journal publication.
However, some of these projects turn out to be highly valuable - the most obvious area is genomics, where in the early days scientists showed that dynamic programming was a revolutionary tool for thinking about protein and nucleic acid structure - now a multi-billion-dollar industry (or more). The same is true for quantum mechanics codes that predict properties of substances.
The downside is that much code gets thrown away and is difficult to build on. To try to overcome this we have built up libraries which are shared within the group and with the world as open source (but here again there is very little credit given). Many researchers reinvent the wheel ("head-down" programming, where colleagues are not consulted, and "hero" programming, where someone tries to do the whole lot themselves).
Too much formality at the start of a project often puts people off and innovation is lost (no-one will spend 2 months writing formal specs and unit tests). Too little and bad habits are developed and promulgated. Programming courses help but again it's difficult to get people doing them especially when you rely on their goodwill. Mentoring is extremely valuable but not always successful.
Are there online resources which can help to persuade people into good software habits?
EDIT: I'm grateful to dmckee (below) for pointing out a similar discussion. It's all good stuff, and I particularly agree with version control being one of the most important things we can offer scientists (we offered this to our colleagues and got very good take-up). I also like the approach of the Software Carpentry course mentioned there.
It's extremely difficult. The environment both you and Stefano Borini describe is very accurate. I think there are three key factors which propagate the situation.
Short-term thinking
Lack of formal training and experience
Continuous turnover of grad students/postdocs to shoulder the brunt of new development
Short-term thinking. There are a few reasons that short-term thinking is the norm, most of them already well explained by Stefano. As well as the awful pressure to publish and the lack of recognition for software creation, I would emphasise the number of short-term contracts. There is simply very little advantage for more junior academics (PhD students and postdocs) to spend any time planning long-term software strategies, since contracts are 2-3 years. In the case of longer-term projects e.g. those based around the simulation code of a permanent member of staff, I have seen some applications of basic software engineering, things like simple version control, standard test cases, etc. However even in these cases, project management is extremely primitive.
Lack of formal training and experience. This is a serious handicap. In astronomy and astrophysics, programming is an essential tool, but understanding of the costs of development, particularly maintenance overheads, is extremely poor. Because scientists are normally smart people, there is a feeling that software engineering practices don't really apply to them, and that they can 'just make it work'. With more experience, most programmers realise that writing code that mostly works isn't the hard part; maintaining and extending it efficiently and safely is. Some scientific code is throwaway, and in these cases the quick and dirty approach is adequate. But all too often, the code will be used and reused for years to come, bringing consequent grief to all involved with it.
Continuous turnover of grad students/postdocs for new development. I think this is the key feature that allows the academic approach to software to continue to survive. If the code is horrendous and takes days to understand and debug, who pays that price? In general, it's not the original author (who has probably moved on). Nor is it the permanent member of staff, who is often only peripherally involved with new development. It is normally the graduate student who is implementing new algorithms, producing novel approaches, trying to extend the code in some way. Sometimes it will be a postdoc, hired specifically to work on adding some feature to an existing code, and contractually obliged to work on this area for some fraction of their time.
This model is hugely inefficient. I know a PhD student in astrophysics who spent over a year trying to implement a relatively basic piece of mathematics, only a few hundred lines of code, in an existing n-body code. Why did it take so long? Because she literally spent weeks trying to understand the existing, horribly written code, and how to add her calculation to it, and months more ineffectively debugging her problems due to the monolithic code structure, coupled with her own lack of experience. Note that there was almost no science involved in this process; just wasting time grappling with code. Who ultimately paid that price? Only her. She was the one who had to put more hours in to try and get enough results to make a PhD. Her supervisor will get another grad student after she's gone - and so the cycle continues.
The point I'm trying to make is that the problem with the software creation process in academia is endemic within the system itself, a function of the resources available and the type of work that is rewarded. The culture is deeply embedded throughout academia. I don't see any easy way of changing that culture through external resources or training. It's the system itself that needs to change, to reward people for writing substantial code, to place increased scrutiny on the correctness of results produced using scientific code, to recognise the importance of training and process in code, and to hold supervisors jointly responsible for wasting the time of the members of their research group.
I'll tell you my experience.
There is no doubt that a lot of software gets created and wasted in academia. The fact is that it's difficult to adapt research software, purposely created for a specific research objective, to a more general environment. Also, the product of academia is scientific papers, not software. The value of software in academia is zero; what gets evaluated is the data you produce with that software, once you write a paper on it (which takes a lot of editorial time).
In most cases, however, research groups have recognized frequent patterns, which can be polished, tested, and archived as internal knowledge. This is what I do with my personal toolkit. I grow it according to my research needs, only with those features that are "cross-project". Developing a personal toolkit is almost a requirement, as your scientific needs are most likely unique in some respect (otherwise you would not be doing research) and you want to have as few external dependencies as possible (since if something evolves and breaks your stuff, you will not have the time to fix it).
Everything else, however, is too specific for a given project to be crystallized. I therefore tend not to encapsulate something that is clearly a one-time solver. I do, however, go back and improve it if, later on, other projects require the same piece of code.
Short project spans, and the heat of research (e.g. the publish-or-perish vision so central today), require agile, quick languages - in general, languages that can be grasped quickly. PhDs in genomics and quantum chemistry don't have a formal programming background. In some cases, they don't even like programming. So the language must be quick, easy, clean, flexible, and easy to understand later on. The latter point is vital, as there's no time to produce documentation, and it's guaranteed that in academia everyone will leave sooner or later; you burn the group experience down to zero every three years or so. Academia is a high-risk industry that periodically fires all its hard-formed executors, keeping only some managers. Having code that is maintainable and can be easily grasped by someone else is therefore essential. Also, never underestimate the power of a Google search to solve your problems. With a widely deployed language you are more likely to find answers to the gotchas and issues you stumble on.
Management is a problem as well. Waterfall is out of the question. There is no time for paperwork programming (requirements, specs, design). Spiral is quite OK, but as little paperwork as possible is clearly recommended. The fact is that anything that does not give you an article is, in academia, wasted time. If you spend one month writing specs, it's a month wasted, and your contract expires in 11 months. Moreover, that fat document counts zero or close to zero for your career (as do many other things: administration and teaching are two examples). Of course, Agile methods are also out of the question. Most development is done by groups that are far apart and in general have a bunch of other things to do as well. Coding happens in brief bursts of concentration during "spare time" between articles, and before or after meetings. The bazaar is the most likely model, but the bazaar carries a lot of issues as well.
So, to answer your question, the best strategy is "slow accumulation" of known-good software, developed in small bursts with a quick and agile method and language. Good coding practices need to be taught during lectures, just as good laboratory practices are taught during practical courses (e.g. never pour water into sulphuric acid, always the opposite).
The hardest part is the transition between "this is just for a paper" and "we're really going to use this."
If you know that the code will only be for a paper, fine, take short cuts. Hardcode everything you can. Don't waste time on extensive validation if the programmer is the only one who will ever run the code. Etc. The problem is when someone says "Great! Now let's use this for real" or "Now let's use it for this entirely different scenario than what it was developed and tested for."
A related challenge is having to explain why the software isn't ready for prime time even though it obviously works, i.e. it's prototype quality and not production quality. What do you mean you need to rewrite it?
I would recommend that you/they read "Clean Code"
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882/ref=sr_1_1?ie=UTF8&s=books&qid=1251633753&sr=8-1
The basic idea of this book is that if you do not keep the code "clean", eventually the mess will stop you from making any progress.
The kind of big science I do (particle physics) has a small number of large, long-running projects (ROOT and Geant4, for instance). These are developed mostly by actual programming professionals, using processes that would be recognized by anyone else in the industry.
Then, each collaboration has a number of project-wide programs which are developed collaboratively under the direction of a small number of senior programming scientists. These use at least the basic tools (always version control, often some kind of bug tracking or automated builds).
Finally, almost every scientist works on their own programs. Use of process on these programs is very spotty, and they often suffer from all the ills others have discussed (short lifetimes, poor coding skills, no review, lots of serial maintainers, Not Invented Here syndrome, etc.). The only advantage here, compared to small-group science, is that these scientists work alongside the tools I talked about above, so there is something you can point to and say, "That is what you want to achieve."
Don't really have that much more to add to what has already been said. It's a difficult balance to strike because our priorities are different - academia is all about discovering new things, software engineering is more about getting things done according to specifications.
The most important thing I can think of is to try to extricate yourself from the culture of in-house development that goes on in academia and to maintain a disciplined approach to development, difficult as that may be in many cases owing to time constraints, lack of experience, etc. That in-house control-freakery sucks away individual responsibility and decision-making and leaves them in the hands of a few who do not necessarily know best.
Get a good book on software development - Code Complete, already mentioned, is excellent - as well as any respected book on algorithms and data structures. Read up on how you will need to manage your data, e.g. whether you need fast lookup / hash tables / binary trees. Don't reinvent the wheel - use libraries and things like the STL, otherwise you are likely to be wasting time. There is a vast amount on the web, including this very fine blog.
Many academics, besides sometimes being primadonna-ish and precious about any approach seen as businesslike, tend to be quite vague in their objectives, to put it mildly. For this reason alone it is vital to build up your own software arsenal of helper functions and recipes, eventually and hopefully ending up with a kind of flexible experimental framework that enables you to try out any combination of things without being too restricted to any particular problem area. Strongly resist the temptation to just dive into the problem at hand.
I have been in the IT industry for 10 years now but have worked in "traditionally" managed project teams (both well managed and badly managed ones).
I have heard of the "new" Scrum or XP type of project management and have yearned to be part of one (as s/w folks we always like anything new, I guess) but have not had the opportunity.
My question is this - what are your experiences in moving to the "new" way? Was it significantly better, worse, or not any different? Has there been any improvement in project success rates when using the XP way of development, or is it the same as any well-managed traditional project?
This should not be a political question - I'm just after your experiences if you have moved to the new world, or tried it at least once and moved back.
Thanks in advance
Before I ever heard of XP, I had a really good manager (Mike) at an early job I had. He was used to managing engineers and transitioned to managing software. After a few bad working experiences I looked back at his style versus typical project management I had before and after working with him.
Met with everyone at least once a day but gave us space to work
Used a whiteboard with two columns - people and what they were working on - so anyone could look at the board and see what had been done or was being done
Had everyone cross-train. I learned RCS and then CVS there, and how to use makefiles
Ran a productive "post mortem" when a task was completed. He would ask questions like "would it have helped if X?" or "next time, can we try to..."
Kept everyone working on short tasks and managed our time so we were always working on something but never had a ton of stuff piled up
Mike did everything on paper. He would keep notebooks and index cards with him. He insisted that anything asked of him by management be converted into manageable tasks, often written on note cards. He refused to have anyone work on anything that couldn't be clearly explained or had a clear objective. He would ask the VPs "what do you mean by faster?" "What kinds of metrics are the reports meant to show?" "Why should this be a priority?" He seemed to have near infinite patience in writing out what needed to be done and what was meant by "done"
When I first read the XP book, I was amazed by how much was familiar as "the way Mike worked"
It seems that Agile is just about implementing a set of best practices and evaluating how they work in your environment. When they don't work, change them. When they do work, stick to them.
I think the real problem with traditional project management is that more often than not, it doesn't really exist. I'm amazed by how many shops claim to use RUP or Code Complete or even Agile and don't actually have anything recognizable as project management. Sure, there are meetings. And people called project managers. But ask a simple question like "what has been done on project X?" or "what is left to do on project Y?" and no one has an answer. They have to dig through emails or point to a comically inaccurate MS Project file.
If a person claimed to be on a diet and couldn't answer questions on what they were eating or how they were exercising; would you accept that they were really on a diet?
You take your old baggage with you when you go. Meaning that any project management bad practices you had before will still linger.
However, I will say that things improved greatly when we began to close the loop between us and the customer. Greater and more frequent feedback and prototyping with the customer means far fewer moments of the customer saying, "This is not what I wanted."
I've used (a slightly modified) Scrum before at work and here are my thoughts:
The daily meetings and burn-down provided motivation to make progress on tasks.
Our manager could talk to colleagues across the pond and show them "this is what we're working on this month."
You knew exactly what tasks you needed to get done, and had already estimated the time required to complete.
When priorities changed (new tasks, important bugs added), there was a well-defined process to handle adding them to the sprint or simply pushing them to the backlog.
These are lovely answers, but I think everyone's confusing project management with development/design methodologies.
I'm on a team that started Scrum a few months ago and we seem to be getting things done faster and with much less "waste" (projects that are scrapped). Just my observations from our small team (4 devs).
I've found the overall move to Agile/XP practices very positive, in many ways it front loads quality into the project/development process. You'll need buy-in from management and from the team to really see success...a few suggestions:
trial any change with a small project (2-3 people)
understand what areas your current team can most improve in (quality? productivity? time-to-market?) and incorporate a few Agile/XP/Scrum (whatever) processes... don't incorporate them all at the same time, and understand which processes address which issues prior to any change
if possible - track those areas you're looking to change and compare to another project running at the same time (the mere focus of improving something is often enough to improve it; there's a study/term for this, but I forget what it is)
sometimes you'll see a dip in performance as you begin a new process, this is part of the learning curve
never assume that a good change today will remain a good change tomorrow, always review your project areas and be ready to change any process at any time
no change remains good forever, just like refactoring code, refactor your processes
ensure you get buy in from the team and management, you can't force success
I like some of the things the agile approaches do, but I also value some of the things traditional approaches do.
Both can work, as can a mixture of the two, which is what I find works best for my team now. I have implemented incremental development and it really helps us; iterative development is a little harder and we're still working on that. However, we have a variety of constituents, and many of our stakeholders (and PMs) prefer traditional artifacts and milestones. So we have to keep finding the right balance.
I have also found that even more important than the methodology is the people implementing it. Good people find a way to do good work and get things done regardless of the methodology, although certainly the methodology can have effects on efficiency (and morale :) ). Poorly aligned resources, however, can use the finest methodology and find ways to deliver poor results.
For developers, the great lessons of XP & Co. are shorter release cycles and a more evolutionary approach - in the sense that change in requirements is accepted as a natural part of any project. Also, customers suggest solutions, but designers and developers need to understand the problems.
Lessons for managers: developers are not exchangeable spec-to-code converters; their individual strengths and weaknesses can make a productivity difference of 10 or more for a given topic. Knowledge and experience are the most valuable skills in your team, and developers can teach each other. Managers need not understand what developers do in order to enforce desired results.
XP & Co. usually mix solutions to these with the problem of making a company change. The heroic XP consultant single-handedly saving a doomed, delayed, and derailed project acts in large part as a buffer between development and management. But if you are looking at what to learn, you have to separate these aspects.
What I've learnt in recent years is that bugs aren't a personality fault, and that the sky doesn't fall when specs change. I've learnt that while design errors are still the most expensive to make, there isn't a single "perfect" design. Instead of getting one thing right, we need to implement safeguards so that none of the many details goes wrong - and I've learnt to use the leeway between "right" and "not wrong" to our advantage.
My experience has been that I prefer Scrum over traditional approaches, because it rarely happens that requirements stay unchanged for the length of a project - and projects usually seem to run at least 6 months, up to my current one at over a year.
There can also be the case where there isn't any project management and everyone just scrambles to "make it work", so having some formal structure is better than nothing. There is something to the question of how well the team comes together: egos rarely appear, as it isn't any one person's code but rather the team's code, and there is a kind of group think where, while each person has their view, no one tries to force everyone else to see things that way.
At times it seems to me that some Scrum and Agile approaches I've used end up being like rapids instead of a big waterfall. What I mean is that the cycle of gather requirements - Analyse and Design - Implement - Test - Deploy and get updated requirements seems to be repeated over and over so that what comes out in the end would be extremely hard to state at the beginning of the project unless the project sponsor could give very detailed requirements that would never change.
What are the main reasons personal projects (software apps etc) never get to the level of competing with your salary?
To me one big problem is "on-the-fly" feature expansion, with this problem, the end only gets further and further away!
For me, it's simple: I work 8 hours a day already. I spend a few more hours a day keeping current. I have a girlfriend, some local family and a decent circle of friends. I have (gasp) non-computer-related interests and hobbies. In other words, I have a life.
So ... Time. Time is not on my side. Would that it was ... My blog might be a bit more current if there were just two more hours in every day. :)
(Originally posted by John Rudy.)
If you want your hobby to become your job, you have to acquire all the other skills you need to be in business. At the end of the day your pet project has to stand on its own two feet in the real world. While you are enjoying the coding, you need to put together a concrete plan to commercialise your activity.
Most hobby projects fail to make the big time for one of two reasons:
The idea is not commercially viable
The discipline necessary to commercialise the idea is missing
Just because you are a great technologist does not mean you'll be a great businessman. You may be, but the two are not necessarily linked. It is no weakness to consider partnering with someone who has no technical skills but a good network and some proven business acumen. Quite often people like that are looking for techies too so you might find a great partnership. That person can provide the structure and commercial discipline that you probably lack if feature creep is pushing your completion backwards.
I think the primary reason is the simple work overload that most developers experience. Most personal projects take place in the evening and weekends, and as excited as most of us get about our ideas for personal projects, after 40 hours (or more) of salaried programming, it's hard for "more work" to compete with watching a game while sipping a beer or spending quality time with the family.
Different skill sets are required to start and maintain a business than to develop software. Entrepreneurship skills can be learned, but not everyone has the skills to make it happen. A lot of the time, the skills it takes to get something started and off the ground are different from the skills it takes to finish and polish it. For me, I know that I have the creativity to make software and find ways to solve problems, but I have little interest in finding funding for a business and marketing a product or service.
Assuming that you're a developer, it's most likely due to the fact that you do not know when, or are incapable of, stopping development and focusing on other things, like marketing and sales.
Time and losing interest: there is always a new tool or technology that can take your attention away from completing projects.
I'm not sure if I understand your question, but here are a few answers:
Adding "on-the-fly" features isn't necessarily a bad thing. In fact, it's the expected model of Web 2.0 and Web 3.0 projects. The key is to keep them very simple, only roll them out once they've been tested, and listen to your users. If you try to dump the kitchen sink in on the first release, it will most likely be ugly, confusing, and buggy.
Being a great programmer is only a part of it. You need business skills, marketing, knowledge of the user's needs and how to meet them, artistic/design skills, and a hell of a lot of luck.
Lots of people have great ideas. Often different people have the same ideas. Most never get implemented. Of those that do, very few succeed. In some cases, revolutionary products took years to convince buyers and users that they even wanted the product. Often the people or companies behind the first few iterations failed miserably, and then a third or fourth person or company finally hit the market at the right time with the right product. Apple is great at both ends of this, by the way - they not only innovate (the first Mac, the Newton, etc.), but they also wait until the market need grows and they sense a place to pounce in and take advantage of it (the iPod, the Mac vs. Windows issues, etc.).
Most of these bullets apply as much to software as they do to widgets and services. The big advantage that software has is lower startup costs. Just like the saying "On the Internet, no one knows you're a dog" - "When looking at a web app, the user doesn't know if you are a multi-billion dollar company or a single guy sitting in your underwear in your parent's basement." If your software is good, that is...
I'd say one of the big reasons is that by nature, personal projects don't get as much attention as your job will.
I have a slew of personal/side projects I'm working on, but they get far less of my attention than my 'real' work does because, right now, that's what's paying the bills.
If I were to take a month off and work only on my personal stuff, it'd probably be pretty cool / worth money.
developers often design for themselves instead of for their customers
developers tend to put off releasing products until things are 'perfect' - and they never will be
Weakness of mind and spirit. Build a team around your product early.
Scope creep. Concentrate on selling what you have already got: "The customer can have any color he wants so long as it's black". Henry Ford
Small feature set. Leverage features of your product by what is already available on the market.
Not enough hours spent daily. Often achieving something might depend just on simple routine, putting your time in.
Deep down I think it's a lack of belief in the project. If I believed in what I was doing, I would not stop until I completed the project.
Desire to build an ideal product
For example: there are various ways (algorithms) to get a particular task done, but people wait to discover the one ideal solution, even when multiple solutions to the same problem are already available. That ideal solution is never found.
Procrastination
Your personal software projects don't compete with your salary for one reason.
What do you do for your salary? Whatever that is -- however much you may like or dislike it -- it is more valuable than your software product.
"But my day job involves a lot of stupid time-wasting meetings." So? Clearly, someone will pay you more for wasting your time in meetings than for your software products.
"But my day job forces me to waste months in useless analysis and design documents and test plans that never even get used." So? Clearly, someone thinks this activity is more important than writing software.
"How can meetings or useless documents be more valuable than software?" I don't know, but look at your experience. Companies love to pay programmers relatively large amounts of money to hang around and waste time.
Companies don't love to pay for software.
Your personal projects don't compete with your salary because your time is more valuable than your products.
The biggest reason? Because if you can write it yourself and people like it, someone else can make an open source version with much better support than you can provide alone. Why not skip the middle man and release it as open source yourself? Sure, you miss out on the direct profit, but that looks very good come hiring time.