I recently got promoted to Project Manager/Supervisor. What leadership style do you think a managerial role in programming development should have?
What's your style?
Hands-off management, servant leadership, and unofficial or "tribal" leadership over traditional management seem to be all the rage these days.
Basically getting out of the way and allowing the team to get their job done seems to make sense to me, but it all depends on the culture.
I would say an effective manager would already have a style, and know how he wants to work, whereas a less effective manager would probably learn from other senior managers and simply emulate "however things get done around here". Actually in a lot of places the latter is the only choice.
If you have the freedom to do things the way you want, I would probably prefer to borrow ideas from the agile/lean camp than the more traditional PMI/Prince2/PMBOK camp, but it all depends really.
The job of a manager is to get out of the way and let the developers do their jobs. If they encounter an obstacle it is your task to remove the obstacle.
I do not believe in simple management guidelines. In an ideal world, the job of a software manager would be just to provide food, computers, electricity and salaries, but we are hardly in an ideal world.
In a way, being a manager is a highway to frustration. There are few opportunities for a direct contribution to the project, you spend most of the time on planning, meetings, writing reports, and proposing future projects. In a nutshell, you have the responsibilities, while they have the joy of building things. In order to avoid quitting the job due to lack of fun, one needs to find a proper motivation which would justify the troubles.
Now, different people are motivated by different things. Some people like to participate in group efforts, some like the achievement of building things which can't be built by a lone entrepreneur, some like the power, some like the money. I think that a management style should be tailored to the intrinsic motivations of all of the involved parties. For example, it is useless to try to motivate your coworkers with money if they are primarily interested in building cool things (and vice versa).
A key competence in managing people is being able to address conflicts as early as possible. The conflicts range from trivial (X keeps committing buggy code to the repository) to critical (we need to hurry up in order to hit a deadline). I think it is very important to be able to express such concerns frankly and clearly, regardless of the management style. Thus, at the end of the day, oral communication skills are at least as important as the management style.
I don't think it matters a lot which style you choose. When leadership is "broken", it is usually because more basic things are not done right.
Consistency: stick to your style unless you are sure it isn't working out.
Honesty: might seem obvious, but when "fooling around" too much with carrot and stick, things can get out of control.
Respect: tech guys all have different characters, but grasping what they value is easy: being passionate about technology and using it in a professional way will open hearts. Waving your iPhone around to show off fancy-looking but technologically trivial apps might have the opposite effect ;)
Lead by example: your techs do extra hours? You do extra hours too!
Motivation: you don't need a jungle camp every 3 weeks, but you can still help everyone feel better about seeing each other more often than their families. Implement a Friday afternoon beer session if that is acceptable (be strict about times though, no drinking before 6pm for example). Show interest in what people are working on, even if you are not part of operations. When working on abstract subjects, people can have a hard time relating the value they add to the company and to the team. When "in the zone", programmers in particular can become like lone astronauts; having a broader understanding of the business, you will need to remind people of the mission (though that's mostly a PM task, but no PM is perfect either).
In the end, you are a good leader when your team says "WE did it!"
There are plenty of methods with catchy names, but in general I prefer the management style to be lightweight and encourage communication.
I suspect a lot of us have had the experience of spending more time filling out forms than actually developing. That is both frustrating and unnecessary. Controls are important, but a new form is not the solution to every managerial problem.
As far as communication goes, many managers seem to believe that it will work if everyone reports up to them and then they send the collected information back down. That can really lead to disaster. The team needs to communicate with each other well and often.
Finally, I'd like to throw in that, as tempting as it is to take a new resource on a project and get them developing as quickly as possible, I think it will always work out better in the long run to hold off and get them properly trained and oriented to the project.
My style is a combination of Attila the Hun, Napoleon Bonaparte and Nelson Mandela. Whatever you do, don't try to adopt my style.
More seriously, to be a good leader you have to develop your own style and you have to integrate that into the culture of the organisation you work in. So, the answer to your question must start with asking yourself some penetrating questions and giving honest answers to them. You must also take some time to understand the traits of the individuals in your team and figure out what makes them tick and how to motivate them as individuals. What works with one may not work with another.
And, while I'm writing, I'll direct a passing kick at the respondents who suggest that it is a manager's job to get out of the way and let the team work: it's the manager's job to manage, you have people you work for who have certain expectations of you and you have to pay attention to them as well as to the losers on your team.
I write 'losers' because you have just been promoted and they haven't. Sure, you have to lead them to great achievement but you won't do that by keeping out of their way, you'll do it by leading them in the right direction, with the right mix of carrot and stick. Oh, and don't let them know that you think they are losers, it will upset them.
First of all: if you try to adopt a "style" that's not your own, you will most likely fail. You basically just have to be yourself! (That's probably why you got promoted in the first place.) That said, there are some maxims to embrace, one being "you can always be a better leader" ;) I guess that's part of why you posted this question. My advice is to support your co-workers, and remember that it's your job to make them as good as possible. Try to keep yourself on top of everything that happens within the project and encourage communication within the team. Agile-style development helps with that. Also, try to put yourself in your co-workers' shoes and imagine what they expect and want from you. Best of luck.
There is no one "style" that you can or indeed should focus on. The reality is that you are now a people manager and people are all different. You need to learn to recognize the differences in the people you are managing and respond accordingly. This is a technical role, so if you have some technical understanding then this will assist with gaining respect of the team.
Some people need to be told what/how, some people need a gentle prod and some need full ownership of a task. Learning to spot the differences is where you need to apply yourself.
Typically people fall into 4 distinct camps with different names depending on the management course of the day :)
Beginner, highly motivated, not much experience, needs a more directive approach
Learner, more capable, but may be experiencing frustration, needs coaching
Performer, very capable but may lack confidence, needs supporting in their approach
Achiever, capable and committed, needs delegation of tasks
Management 3.0: Leading Agile Developers, Developing Agile Leaders by Jurgen Appelo is a book dedicated to answering this question. http://www.management30.com/. His home page is here: http://www.jurgenappelo.com/
In his book and class, he refers to Martie, the Management 3.0 model. It is composed of
Energize People
Empower Teams
Align Constraints
Develop Competence
Grow Structure
Improve Everything
An excellent introductory presentation can be found here: http://www.slideshare.net/jurgenappelo/what-is-agile-management
Jurgen's two key takeaways:
A software team is a self-organizing system. Support it, don't obstruct it.
Agile managers work the system around the team, not the people in the team.
Enjoy.
I am a final-year college student. I am trying to figure out how much time I should spend on coding and learning.
while (true) {
    learn;
    code;
}
I have two friends in my university, both studying media informatics, and both were absolute beginners in programming.
The first one reads a lot at home if he has to learn new languages for a project but has never had a private programming project.
The second one reads a bit but has his own python project. A web application for his friends, where you can bet on soccer results.
Comparing the two: the first guy is slow at programming, always stumbles over simple things, and his code could be shrunk (in lines and comments) by a factor of at least 5. And two days later he will stumble over the same issue again...
The second guy is much faster, can easily read foreign code and languages, and stumbles over a problem at most twice; the third time, he uses what he has learned...
So imho, doing your own project, where you code because you love it, where you work until morning to fix a bug or to finish an implementation, is the very best way to learn!
Surely you've realized that putting a finite measurement on the amount of time you should spend coding is futile and hugely irrelevant.
Do what you want, but always try and keep up to date.
When I first started programming, it seemed that I learned new things by leaps and bounds: functions, classes, inheritance, etc. But after a while, I realized that you learn by coding. I loaded myself up with tonnes of reading material - Effective C++, Modern C++ - but nothing beat actually sitting down and coding.
Of course, writing your code the same way over and over again does not make you a better programmer. You have to learn to think - how do I make it reusable? less error prone? portable? immune to changes in other areas of the application? easier to maintain? Is there a better way to do this?
Eventually, the learning peaks, and what you learn next is what I like to term multipliers. It's like knowing that dirname(__FILE__) in PHP returns the directory that an include file is in. It's like finding out what an ORM is and how, by abstracting away the DB, you can focus more on the code logic rather than endless routines of writing INSERT and UPDATE SQL statements. It's like learning smart pointers and effective use of the STL in C++, or using Firebug effectively when doing JavaScript/CSS/HTML... and lots more.
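To make the "multiplier" idea concrete, here is a minimal sketch in Python (my illustration, not from the original answer; the users table and the insert helper are invented) of how one generic helper replaces a family of hand-written INSERT statements:

import sqlite3

def insert(conn, table, row):
    # Build "INSERT INTO t (a, b) VALUES (?, ?)" from any dict,
    # so no per-table INSERT ever has to be written by hand.
    cols = ", ".join(row)
    marks = ", ".join("?" for _ in row)
    conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                 tuple(row.values()))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
insert(conn, "users", {"name": "Ann", "email": "ann@example.com"})

A real ORM goes much further, but the payoff is the same kind: one abstraction that multiplies across every table in the application.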
So code; once you get frustrated about something ("There must be a better way to do this than now!"), search for a better way - this is how I learn, anyway.
When I was young:
Monday to Friday, 10am to 7pm, programming in office
Saturday afternoon, reading in Chapters
Monday to Saturday, 9pm to 1am, programming at home
Sunday, drive to downtown and pick up a few books from the bookstore
those were the days when Google was known as nntp
These days:
Monday to Friday, 10am to 7pm, coding in office (too bad I am on web now ;-)
9pm to 1am, coding on my MacBook Air on a few iPhone projects
Saturday and Sunday, coding for another 16 hours
too bad, Google interrupts me too much and I cannot count how many hours are spent on reading blogs and PDF books ...
You have to decide that yourself. If you're constantly feeling like you should spend more time coding, then you're probably right. You should never force yourself to the point where the sight of a curly bracket makes you want to puke either. If you're sufficiently interested in programming then the amount of time you spend naturally without slacking off/burning out, will be just fine. (And if you're not, you should cut your losses as soon as possible.)
Rest assured that this kind of approach does not make you a less valuable programmer than the angry nerd in your class who spends every waking hour coding as part of his master plan to get back at the world.
The simple answer: do not create some sort of schedule.
Why?
You can never know ahead of time what situation you'll be in at a given hour. Say you schedule coding every day at 10am; then suddenly your dog dies today at 10am and your family calls you to mourn poor Snuffel... for hours. The schedule's all ruined.
So what do you do?
Code away; if you get tired, grab a book or read an article (articles today are really juicy), and if you get tired of reading and coding, play games that rattle your brain (yet are entertaining, something like Civilization IV). Once you're all rested, fire up your IDE and apply what you just read about. Don't worry if you make a mess of it the first time (unless you're a mad genius who will certainly kill himself if he doesn't get something right on his first try).
Note: you should probably set a time limit on how long you play the game, though :)
My suggestion would be to discover your strengths, and if learning is among them, then you may enjoy spending a lot of time learning, so do what you want here. Of course, one shouldn't go so overboard that things like hygiene get sacrificed, so do try to maintain some basic standard of living, which includes cleaning your place, yourself, and that kind of thing.
For myself, I'd say that I'm almost always trying to learn something, somewhere. Maybe it is learning how much patience I have in traffic, or how well I can handle a curveball life has thrown me, such as having to do income taxes and discovering what has changed in the software or the tax laws since the previous year. If you look at life as a series of opportunities, you may learn a lot in the world.
In my humble opinion, most of your time is spent programming, and while you program you are learning from experience. This is one type of learning. Another type of learning comes from reading books and other resources (courses, the internet, development conventions). I use books to keep up with technology and to better understand what I am doing. I read almost every day for 0.5-1.0 hours. It depends on your free time and the type of person you are.
Please take into account that there are more ways to learn: code reviews, reading other people's code, and I am sure there are others that I didn't mention here.
Anyway, good luck.
I am guessing that "learning" here means acquiring new tips and tricks, grasping new technologies in the market, and staying up to date with the trends in technology.
From my experience, learning takes around 20% of my time, mainly because I work on all the latest technologies from Microsoft, like WPF/Silverlight/Surface. But this percentage really depends on your personal interests, your organization's interests, and the type of career growth you are looking for.
And if your job is merely converting domain/business logic into code and doesn't involve critical technology roadblocks, then the time you need to spend on learning might be close to 0%.
Since you didn't present any constraints or conditions in your question then the simplest answer I can give is:
Spend as much as you want.
The very fact that you have to ask might mean you are not ideally suited to writing code. First and foremost, you should love coding and finding out how things work.
This is not a profession where you ever stand still. Totally agree with another poster, in that you should always be looking for a better way, and recognising when there is no better way.
The best software workers - the rockstars, if you will - are always on. Any situation can teach you something. For instance, consider Gregor Hohpe's article Starbucks Does Not Use Two-Phase Commit, in which he analyses how the coffee vendor uses asynchronous processing to maximize throughput of customers' orders.
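As a rough sketch of the asynchronous pattern Hohpe describes (my illustration, not his; the names and drinks are invented): the cashier keeps taking orders while a separate barista works through the queue, and the name on the cup correlates each finished drink with its customer.

import queue
import threading

orders = queue.Queue()

def barista():
    # Drinks are made at the barista's own pace, independent of the cashier.
    while True:
        name, drink = orders.get()
        print(f"{drink} ready for {name}")  # correlation: the name on the cup
        orders.task_done()

threading.Thread(target=barista, daemon=True).start()

# The cashier never blocks waiting for a drink to finish.
for name, drink in [("Ann", "latte"), ("Bo", "mocha")]:
    orders.put((name, drink))
orders.join()

The throughput comes from the decoupling: taking orders and making drinks proceed at their own rates instead of in lock-step.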
Coding == Learning
In my opinion.
I realize that this question is likely to get a lot of "it depends", but I am curious anyway. When you hire somebody new (but experienced) onto the team, and they don't have expertise in the technology you are using but know something similar, how much time do you budget for them to "get online"?
I am talking about something fairly substantial, like a language or a framework/product that has a lot of ways of doing things. Obviously, many libraries take very little time to start using.
In my own experience (10 years of experience, including a fair amount of consulting, so learning new technologies is par for the course), it takes me about three to six months of experience to become proficient at a new technology, and about a year to feel like I am approaching expert level where I know all the basics and medium-difficulty issues, along with a few areas very well.
What do you do in your projects? How do you budget the time to account for learning?
It doesn't only depend on the individual involved; it crucially depends on the specific technology as well as the individual's background. Certain technologies, especially languages, are just harder and slower to get into. I've seen world-class Java gurus with zero previous exposure to C++ take many months, say on the order of six or so, to be fully productive in C++; vice versa (a world-class C++ guru with zero previous exposure to Java), I've seen it take about 2-3 months. Again, for extremely experienced and skilled programmers with no previous exposure to dynamic languages, being fully productive in Python can be expected to take 3-4 weeks. In each case I'm talking about 100% full-time involvement in the relevant technology, by a programmer in the world's top one percent in terms of skill and experience, within a team having several other programmers of that caliber who are also gurus in the specific language in use.
Factors that can shorten the time are previous exposure to "similar" languages/technologies, e.g. a solid background in C makes C++ slightly faster to learn, solid background in C# helps with Java, solid background in Ruby or Perl helps with Python. Factors that can lengthen the time include lack of suitably experienced teammates, not being 100%-immersed in the "new thing", and psychological resistance (not really wanting to do it with all one's heart!-).
I've focused on programming languages for my examples, but some technologies can be even harder, i.e., take longer to master. If you've never written embedded hard-real-time programs (no dynamic allocation of memory allowed, proofs of upper bounds on response time required of all functions), even six months might not suffice. Some application areas require mastery of application domains that, all on their own, can take even longer (if, to understand at all what's going on and therefore be fully productive, you need the equivalent of a BSc in Psychology, or deep knowledge of the Law, or a CPA's qualifications, etc., well, each of those takes years on its own!).
I don't think the language as such is the issue, rather the programming paradigm it encompasses.
e.g. earlier this year I tried C#, coming from a Java perspective. That was all very straightforward. However, I'm now trying Scala. Because of the functional aspect, I expect to be learning and honing my skills for a lot longer (you can write Scala in an imperative fashion, but you don't leverage its strengths doing that).
I suspect the same would apply when (say) migrating from a relational database to an OO database, vs. a MS-SQL/Oracle migration.
It does depend, mainly on how closely the language resembles a language they already know, as well as individual abilities at picking up new things. Moving between similar languages like C++, Java, and C# is very easy. Similarly, moving from (say) Win32 to MFC to .net is going to be easier than from MFC to MacOS.
Moving from C to C++ is likely to take longer, as the programmer has to learn OO methodologies. Moving from C++ to Perl or ML could take a lot longer!
However, you usually don't need to know much to get started. Moving from C++ to C# can be done with a few hours of reading (on the main differences), and then you can start writing (or modifying existing) code. That's because (a) you already know how to do OO programming, and (b) 95% of the syntax is identical.
But the main thing it depends on is your definition of "proficient". With similar languages, you will be able to write good code within a few days (an algorithm is usually fairly language-independent), but it usually takes months or years to become truly "proficient" in a language or a large library.
So I'd say as a rule of thumb, "up to (a reasonable) speed" in a few weeks, but you might see silly "mistakes" or inefficiencies in their code for months/years until they learn all the little tricks of the language.
In the case of people learning OO, it usually seems to take a few days to get the basic concepts, and then at about the 2-year mark a moment of epiphany occurs where the programmer suddenly realises that they truly "get" it. (I guess this is when your brain starts thinking fluently in OO rather than trying to think procedurally and then translate that into an OO approach.)
In our environment (US health care revenue cycle), it is more than just learning and becoming proficient in the language or technology stack we use to deliver our solutions to our customers. The developer also has to understand the problem domain. We work with entities that often don't document the behaviors of their systems well enough for external parties (us) to communicate with them and get the data that our customers want. Our developers are forced to think beyond the specs to build a functioning system.
There is also the inevitable "It doesn't work; fix it" problem report from the customer support staff. Frequently the problem isn't a defect in our software; it is an issue with other entities with which our software communicates. Our developers have to be able to identify (and sometimes prove) that it isn't our software so that our business analyst-types can go to that other entity and explain the issue in a way that will get them to resolve the problem.
You were expecting this answer but it all depends on the person/programmer. I have been in a situation where two equally skilled programmers had to pick up something new, one got it right away, while the other one took some time. Previous exposures to other technologies are also a factor.
Personally, when it comes to something new, I budget my time to learn everything about it every chance I get. It takes me about 6 months to become fully comfortable.
Hope this helps.
I am talking about something fairly substantial, like a language or a framework/product that has a lot of ways of doing things. Obviously, many libraries take very little time to start using.
When you hire somebody new (but experienced) onto the team, and they don't have expertise in the technology you are using but know something similar, how much time do you budget for them to "get online"?
Twenty-three work days, six hours, forty-three minutes, and seventeen point nine seconds.
What do you do in your projects? How do you budget the time to account for learning?
I think these questions are better!
Try to find an easy project in the new technology, and have them do that. If possible, have the person start by fixing bugs, then adding small features.
Learning is incremental. One can continue learning details of, say, C++ syntax throughout one's life. When one is an "expert" in a topic, it just means that the gains from learning more in that topic are growing smaller.
+1 for it depends.
It depends on such things as
the attitude and capabilities of the person learning it
how well the problem area and programming paradigm are understood by that person
the similarity of the new technology / language to other technologies he/she does know
the consistency of the new technology / language in its interface (API, grammar, etc...)
what counts as proficient (knowing just the language, or also the basic library, or also runtime behaviour (interactions with the underlying technology))
Having said that, in my experience a smart person learning a new language/technology will quickly be more productive than other people with more experience in that language/technology.
See Peter Norvig's Teach Yourself Programming in Ten Years for the related question of how long it takes to become proficient in programming.
It so completely depends on whether you already know languages that are similar to the new one, and know something about the problem domain the new language is suited for. I'd say don't expect to be reasonably proficient in less than 3-6 months, but again, it depends.
To take one example, I implemented a PHP/MySQL web application a couple of years ago (total effort was about 6 months). It was my first reasonably large web application, and my first PHP ever. I've used relational databases before, but this was also my first exposure to MySQL. MySQL came very quickly, as expected, since it's really only a dialect of a language I knew well. What surprised me was that PHP also came quickly. I realized that not only did it borrow ideas from Perl and C/C++, but the whole paradigm of coding with integrated SQL statements strongly drew on some experience I had in the 90's with, of all things, Informix 4GL.
At the other end of the spectrum, I've never really learned a functional language, so I'm trying to pick up Scala. This is going to take substantially longer, and there'll be a long period where my Scala will feel like Java in disguise, and not be that functional.
So ... it depends! ;-)
I agree that it depends.
You also run the risk that if the person knows one technology/paradigm, they will code in the new language/technology using the old practices/paradigms.
For example, I picked up Python really fast (I'm a Java/C++ guy), but it took a long time before I stopped writing Java-style code in Python and started thinking functionally.
To get really good, I think there's no replacement for experience. For instance, I'm sure I can easily pick up J2EE, but the experience needed to build the best enterprise systems is not something you can pick up that fast.
This book was written in the era of time-sharing systems, procedural programming, and about 30 fewer years of software engineering experience. With improvements such as existing libraries, higher-level languages, IDEs, and the amount of documentation and examples available on the internet, how much of the book still holds true?
While I can believe that adding new people to a project may initially slow it down, I would think that things such as unit testing, separation of concerns, and other forms of automation and design improvement would allow new members of a team to become productive faster than assumed in the book, provided the project had good design documentation and processes in place.
I don't have experience on large projects or with large teams, so I am interested to hear what those of you who do have experience with them think.
edit:
I was wondering if new communication tools such as wikis, instant messaging, and the internet in general have decreased the time spent communicating. Based on everyone's answers, I would say that any increase in communication efficiency has been offset by increased complexity.
It is still as true today as the day it was written. This is because it is fundamentally about communication between people working on the same project, and none of the advances of the past 30 years have substantially changed that.
Of course, we have learned a lot in those 30 years, but all improvements in our tools and understanding have been incremental, in accordance with Brooks' "no silver bullet" prediction.
Isn't this kind of like asking if Sun Tzu's Art of War is still applicable to warfare since we have modern equipment?
The book still has things to tell us, and I for one have experienced the problems in communication that increased team sizes bring. You should be aware that unit tests, separation of concerns etc. are not new concepts.
However, some things have not stood the test of time. I don't believe that writing ASCII flow charts in your code is a good idea, and the "surgical team" approach suggested has been tried by several people (Charles Simonyi at MS, most famously) and found not to work too well.
The idea isn't that "large teams don't work", it's that "throwing people/money at the problem isn't the answer". Things like unit testing, separation of concerns, etc. are doing other things rather than just throwing people at the problem. These other things allow you to carefully add more people in the right place to speed things up. If anything, the points you make support the ideas of the book.
Both of the famous Brooks writings, "No Silver Bullet" and "The Mythical Man-Month" are explanations of fundamental limitations, in programming languages and project management respectively.
While it's true that some of the chapters a bit past the halfway point of TMMM deal too heavily with the technology of the time, the remaining chapters are still as relevant today as they were when written.
In TMMM, Brooks follows a technique of "outline the problem", "show some false starts", and "propose my own solution". Some commentators above have pointed out that his own solutions may be considered outdated at this point, but his concise description of the problems inherent in large projects makes the book worth reading.
One theme he keeps coming back to is communications overhead as a limiting factor for large teams. As a thought experiment, consider the effect of the Internet as a communications medium for programmers, and the catalyst it has been for large open source projects.
Personally, I would read the book just for "The Joys of the Craft" section. I've never read anything that so elegantly describes what programming at its best feels like.
(If you're curious, it's on page 7, and viewable in the amazon.com "Look Inside!" feature)
I certainly think things like "No Silver Bullet" are just as applicable today as they were decades ago, especially as we see more and more young people come into the industry and think X is the latest and greatest killer language/technology and all the other technologies will die because of it.
Granted, the references to Ada or sharing computers are antiquated, but the concept of accidental and essential difficulties, buy vs. build, how code is complex by definition because we don't repeat parts, and all the other theoretical topics are still completely accurate and relevant.
The other argument for why TMMM is still relevant is that it's not really about software itself but about how programmers get things done. In this way, it's hard for it to become obsolete.
The two that stick out in my mind: "version 2" still applies, and so does "adding more people does not necessarily make it faster".
Spolsky discusses "version 2" in his own way. I do not recall if he specifically links to MMM but it is very similar in concept.
Communication has become a lot more efficient than when MMM was written; however, I think it's all proportional. It takes a lot more to make software production-ready than it did when MMM was written.
Someone said that everything in computer science was discovered in 1960 something and everything since then has been derivative.
Read TMMM as a book outlining a problem, perhaps THE problem, in software engineering: it's not the technology, it's the people! All the improvements you mention spring from that core realization. They are all in place to solve the problems Brooks laid out. This is the book that I'm sure Kent Beck and Ward Cunningham and Alistair Cockburn and Martin Fowler all read, took to heart, and then began to craft their silver bullets.
I consider this to be one of the "must read" books for anyone who wants to understand the software development process.
The demand for a development workforce has been growing rapidly for the last 40 years, and this need will not stop. Since the proportion of clever people (vide Joel's "smart and gets things done") in the population remains generally the same, educating more and more developers each year means that the average smartness of a developer is getting lower.
40 years ago, it was demigods who became developers; 20 years ago, it was a job for clever people, while now, when I look at young CS students at my alma mater, it seems that they are taking everyone who knows what a computer is.
This does not mean a disaster is coming - the western world keeps importing smart people (or outsourcing work) from emerging markets and third-world countries, and new development tools make it easier to develop good applications. Those factors seem to neutralize each other, making MM-M eternally true.
The Mythical Man-Month is a very dated read, but the core truths still apply. Sure, Brooks discusses the need for a secretary, which is clearly not true today, and his concept of a surgical team doesn't work well, but most of the book is still accurate. His insight that communication requirements increase with the size of the team is still true. His observation that adding people to a late project makes it later has been borne out by a lot of projects. Unit tests help a little, but they don't stop one from misunderstanding the code or asking a lot of questions. No Silver Bullet also continues to stand the test of time.
All of it. The simple fact is that software projects are nontrivial; we build our own domain knowledge, really, directly into our solutions. Domain knowledge transfer is costly, both for the transferrer and for the transferee; this has not changed. And I, for one, believe it never will, no matter what the practices and tools used. Things may get marginally better, but the simple fact is that teaching and learning are both expensive and difficult things, and there's just no way to avoid them.
The social factors are still present, because humans are still essentially the same beasts we were 50 years ago.
The technical examples are almost completely obsolete, and only make sense when you think about the 0.034 MIPS System/360 of 1964. When you've only got 8 KB of memory, suggesting that the user should be responsible for handling leap years instead of wasting 26 bytes of system memory (as Brooks did) made sense, but today it seems downright silly. I don't know of any system that small today -- your telephone is thousands of times more powerful than the most powerful OS/360 system. Today we know a lot more about usability and human-computer interaction, and making the user responsible for that category of thing is just crazy.
One programmer can now write more code/build more software than one programmer could back then, but adding the second developer is not going to produce twice as much.
If/when I get on a project with good design documentation and processes in place, I'll let you know if that improves anything.
If you think of a large project that is late, it is most likely in crisis/panic mode. As with most things in this mode, the best answer is a half-decent plan; simply throwing more resources at a crisis doesn't solve anything, it just wastes resources and compounds the problem.
There is no substitute for scheduling (pronounced shed-uling), planning and decent management.
As with most of these "one liners" or "golden rules", consider them more guidelines (with context) than set in stone.
For a long time I've been trying different languages to find the feature-set I want and I've not been able to find it. I have languages that fit decently for various projects of mine, but I've come up with an intersection of these languages that will allow me to do 99.9% of my projects in a single language. I want the following:
Built on top of .NET or has a .NET implementation
Has few dependencies on the .NET runtime both at compile-time and runtime (this is important since one of the major use cases is in embedded development where the .NET runtime is completely custom)
Has a compiler that is 100% .NET code with no unmanaged dependencies
Supports arbitrary expression nesting (see below)
Supports custom operator definitions
Supports type inference
Optimizes tail calls
Has explicit immutable/mutable definitions (nicety -- I've come to love this but can live without it)
Supports real macros for strong metaprogramming (absolute must-have)
The primary two languages I've been working with are Boo and Nemerle, but I've also played around with F#.
Main complaints against Nemerle: The compiler has horrid error reporting, the implementation is buggy as hell (compiler and libraries), the macros can only be applied inside a function or as attributes, and it's fairly heavy dependency-wise (although not enough that it's a dealbreaker).
Main complaints against Boo: No arbitrary expression nesting (dealbreaker), macros are difficult to write, no custom operator definition (potential dealbreaker).
Main complaints against F#: Ugly syntax, hard to understand metaprogramming, non-free license (epic dealbreaker).
So the more I think about it, the more I think about developing my own language.
Pros:
Get the exact syntax I want
Get a turnaround time that will be a good deal faster; difficult to quantify, but I wouldn't be surprised to see 1.5x developer productivity, especially due to the test infrastructures this can enable for certain projects
I can easily add custom functionality to the compiler to play nicely with my runtime
I get something that is designed and works exactly the way I want -- as much as this sounds like NIH, this will make my life easier
Cons:
Unless it can get popularity, I will be stuck with the burden of maintenance. I know I can at least get the Nemerle people over, since I think everyone wants something more professional, but it takes a village.
Due to the first con, I'm wary of using it in a professional setting. That said, I'm already using Nemerle and using my own custom modified compiler since they're not maintaining it well at all.
If it doesn't gain popularity, finding developers will be much more difficult, to an extent that Paul Graham might not even condone.
So based on all of this, what's the general consensus -- is this a good idea or a bad idea? And perhaps more helpfully, have I missed any big pros or cons?
Edit: Forgot to add the nesting example -- here's a case in Nemerle:
def foo =
    if(bar == 5)
        match(baz) { | "foo" => 1 | _ => 0 }
    else
        bar;
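For readers who don't know Nemerle, the point is that the if and the match are expressions, so they can nest directly inside a binding. Here is a rough equivalent using Python's conditional expressions (my sketch, with bar and baz as stand-in values, not from the original post):

bar, baz = 5, "foo"
foo = (1 if baz == "foo" else 0) if bar == 5 else bar
print(foo)  # prints 1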
Edit #2: Figured it wouldn't hurt to give an example of the type of code that will be converted to this language if it's to exist (S. Lott's answer alone may be enough to scare me away from doing it). The code makes heavy use of custom syntax (opcode, :=, quoteblock, etc), expression nesting, etc. You can check a good example out here: here.
Sadly, there are no metrics or stories around failed languages, just successful languages. Clearly, the failures outnumber the successes.
What do I base this on? Two common experiences.
Once or twice a year, I have to endure a pitch for a product/language/tool/framework that will Absolutely Change Everything. My answer has been constant for the last 20 or so years: show me someone who needs support and my company will support them. And that's that. I never hear from them again. Let's say I've heard 25 of these.
Once or twice each year, I have to work with a customer who has orphaned technology. At some point in the past, some clever programmer built a tool/framework/library/package that was used internally for several projects. Then that programmer left. No one else can figure the darn thing out, and they want us to replace/rewrite it. Sadly, we can't figure it out either, and our proposal is to rewrite from scratch. And they complain that their genius built the set of apps in a matter of weeks, so it can't take us months to rewrite them in Java/Python/VB/C#. Let's say I've written 25 or so of these kinds of proposals.
That's just me, one consultant.
Indeed, one particularly sad situation was a company whose entire IT software portfolio was written by one clever guy with a private language and tools. He hadn't left, but he'd realized that his language and toolset had fallen way behind the times -- the state of the art had moved on, and he hadn't.
And the move was -- of course -- in an unexpected direction. His language and tools were okay, but the world had started to adopt relational databases, and he had absolutely no way to upgrade his junk to move away from flat files. It was something he had not foreseen. Indeed, it was something he could not possibly foresee. [You won't fall into this trap, will you?]
So, we talked. He rewrote a lot of the applications in Plain-Old VAX Fortran (yes, this is a long time ago.) And he rewrote it to use plain old relational SQL stuff (Ingres, at the time.)
After a year of coding, they were having performance problems. They called me back to review all the great stuff they'd done in replacing the home-built language. Sadly, they'd done the worst possible relational database design. Worst possible. They'd taken their file copies, merges, sorts, and what-not, and implemented each low-level file system operation using SQL, duplicating database rows left, right and center.
He was so mired in his private vision of the perfect language, that he couldn't adapt to a relatively common, pervasive new technology.
I say go for it.
It would be an awesome experience regardless of whether it makes it to production or not.
If you make it compile down to IL, then you do not have to worry about being unable to reuse your compiled assemblies from C#.
If you believe that your complaints about the languages listed above are valid, it is likely that many others will think like you. Of course, for every 1000 interested people there might be only 1 willing to help you maintain it - but that is always the risk.
But here are a few things to be cautioned about:
Get your language specification IN STONE before development. Make sure any and all language features are figured out beforehand - even things that you may only want in the future. In my opinion, C# is slowly falling into the "oh-just-one-more-language-extension" trap that will lead to its eventual doom.
Be sure to make it optimized. I don't know what you already know, but if you don't know, then learn ;) Nobody will want a language that has nice syntax but runs as slowly as IE's JavaScript implementation.
Good luck :D
When I first started my career in the early 90s, there seemed to be this craze of everyone developing their own in-house languages. My first 3 jobs were with companies that had done this. One company had even developed their own operating system!
From experience, I'd say this is a bad idea for the following reasons:
1) You will spend time debugging the language itself in addition to the code base on top of it
2) Any developers you hire will need to go through the learning curve of the language
3) It will be hard to attract and keep developers since working in a proprietary language is a dead-end for someone's career
The main reason I left those three jobs was because they had proprietary languages and you'll notice that not many companies take this route any more :).
An additional argument I'd make is that most languages have entire teams whose full-time job is to develop the language. Maybe you'd be an exception, but I'd be very surprised if you were able to match that level of development by only working on the language part-time.
Main complaints against Nemerle: The compiler has horrid error reporting, the implementation is buggy as hell (compiler and libraries), the macros can only be applied inside a function or as attributes, and it's fairly heavy dependency-wise (although not enough that it's a dealbreaker).
I see your post was written more than two years ago.
I advise you to try the Nemerle language today.
The compiler is stable, and there are no blocker bugs today.
The VS integration has seen a lot of improvements, and there is also SharpDevelop integration.
If you give it a chance, you won't be disappointed.
NEVER EVER develop your own language.
Developing your own language is a fool's trap, and worse, it will limit you to what your imagination can provide, as well as demanding that you work out both your development environment and the actual programme you're writing.
The cases in which this doesn't apply are pretty much if you're Larry Wall, the AWK guys, or part of a substantial group of people dedicated to testing the boundaries of programming. If you're in any of those categories, you don't need my advice, but I strongly doubt that you're targeting a niche where there is no suitable programming language for the task AND the characteristics of the people doing the task.
If you are as clever as you seem to be (a likely possibility), my advice is to go ahead and do the design of the language first, iterate a couple of times over it, ask some smart fellows you trust in smart programming language related communities about the concrete design you came up with and then take the decision.
You might realize in the process of creating the design that just a quick hack on Nemerle would give it all you need, for example. Many things can happen just when thinking hard about a problem, and the final solution might not be what you actually had in mind when beginning the project.
Worst-case scenario, you're stuck with actually implementing the design, but by then you will have had it proofread and matured, and you'll know with a high degree of certainty that it was a good path to take.
A related piece of advice, start small, just define the features you absolutely need and then build on them to get the rest.
Writing your own language is not an easy project, especially one to be used in any kind of professional setting.
It is a huge amount of work, and I doubt you could write your own language and still write any big projects that use it - you will spend so long adding features that you need, fixing bugs, and doing general language-design stuff.
I would strongly recommend choosing the language that is closest to what you want and extending it to do what you need. It'll never be exactly what you want, but compared to the time you'd spend writing your own language, I would say that's a small compromise.
Scala has a .NET compiler. I don't know the status of it, though; it's kind of a second-class citizen in the Scala world (which is more focused on the JVM). But it might be a better tradeoff to adopt the .NET compiler instead of creating a new language from scratch.
Scala is kind of weak in the metaprogramming department at the moment. It's possible that the need for metaprogramming is somewhat reduced by other language features. In any case, I don't think anyone would be sad if you were to implement metaprogramming features for it. Also, there is a compiler plug-in infrastructure on the way.
I think most languages will never fit the bill completely.
You might want to combine your 2 favourite languages (in my case C# and Scheme) and use them together.
From a professional point of view, this is probably not a good idea, though.
It would be interesting to hear some of the things you feel you can't do in existing languages. What kind of projects are you working on that can't be done in C#?
I'm just curious!