Bleeding edge vs field tested technology. How will you strike a balance [closed] - project-management

I have been pondering this for some time. How do you pick a technology (I am not talking about Java vs .NET vs PHP) when you are planning a new project or maintaining an existing project in an organization?
Arguments for picking the latest technology
It might overcome some of the limitations of the existing technology (think NoSQL vs RDBMS when it comes to scalability). Sometimes the latest technology is backward compatible, so you only gain the new features without breaking the old functionality
It will give a better user experience (maybe HTML5 for videos, just a thought)
Will cut down development time/cost and make maintenance of the code base relatively easy
Arguments for picking field tested technology/against picking a bleeding edge technology
It has not stood the test of time. There can be unforeseen problems; convoluted solutions might lead to more problems during the maintenance phase and the application might become a white elephant
Standards might not yet be in place. Standards might change and significant rework might be needed to make the project adhere to them. Choosing the field tested technology will save these efforts
The new technology might not be supported by the organization. Supporting a new (or for that matter a different) technology would require additional resources
It might be hard to get qualified resources with bleeding edge technology
From a developer's perspective, I do not see a reason not to get your hands dirty with some new technology (in your spare time), but you might be limited to open source/freeware/developer editions
From an organization's perspective, it looks like a double-edged sword. Sit too long on a "field tested" technology and good people might move away (not to mention that there will always be people who prefer familiar technology and refuse to update their knowledge). Try an unconventional approach and you risk overrunning the budget/time, not to mention the unforeseen risks
TL;DR
Bottom line: when do you consider a technology mature enough that it can be adopted by an organization?

Most likely you work with a team of people and this should be taken into consideration as well. Some ways to test/evaluate technology maturity:
Is your team and management receptive to using new technology at all? This might be your biggest barrier. If you get the sense that they aren't receptive, you can try big formal presentations to convince them... or just go try it out (see below).
Do some Googling around for problems people have had with it. If you don't find much, then that is also what you'll find when you run into problems of your own: not much help.
Find some new small project with very low risk (e.g. something just you or a couple people would use), apply new technology in skunkworks fashion to see how it pans out.
Try to find the most mature of the immature. For instance, if you're thinking about a NoSQL type of data store: all of the NoSQL options are immature when you compare them against an RDBMS like Oracle that has been around for decades, so look at the most mature of those solutions, one that has support available, either professionally or via support groups.
The easiest project to start with is to rewrite an existing piece of software. You already have your requirements: make it just like that. Just pick a small piece of the software to rewrite in the new technology, preferably something you can hammer at with unit/load testing to see how it stands up. I'm not advocating rewriting an entire application to prove it out, just a small, measurable chunk.
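For example, here is a minimal sketch in Python of what such a parity check could look like (the discount functions are invented purely for illustration; the "rewrite" stands in for whatever piece you port to the new technology):

    import unittest

    # Existing, field-tested implementation (stand-in for the legacy code path).
    def legacy_discount(price, quantity):
        if quantity >= 100:
            return price * 0.90
        if quantity >= 10:
            return price * 0.95
        return price

    # Hypothetical rewrite built with the technology under evaluation.
    def rewritten_discount(price, quantity):
        tiers = [(100, 0.90), (10, 0.95), (0, 1.00)]
        factor = next(f for threshold, f in tiers if quantity >= threshold)
        return price * factor

    class ParityTest(unittest.TestCase):
        def test_same_results_as_legacy(self):
            for price in (0.0, 9.99, 250.0):
                for quantity in (1, 9, 10, 99, 100, 500):
                    self.assertAlmostEqual(
                        legacy_discount(price, quantity),
                        rewritten_discount(price, quantity),
                    )

    if __name__ == "__main__":
        unittest.main()

Once the rewritten chunk matches the legacy behaviour under the same tests, you can load-test it separately and judge the new technology on something measurable.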

A few rules of thumb.
Only use one "new" technology at a time. The more new things you use, the greater chance of there being a serious problem.
Make sure there is an advantage to using it. If that cool, new technology does not give you some advantage, why are you using it?
Plan for the learning curve. There will be aspects of the new technology that you do not know about. You, and your team, will have to spend more time learning about them than you think you will.
If possible, try the new technology on a small, less important project first. Your company's accounting system is not the best place to experiment.
Have a backup plan. New technologies don't always turn out to be worth it. Know when you are in a "coffin corner" and it is time to bail out.

There's a difference between "field tested" and "out-of-date." Developers (myself included) generally prefer bleeding edge stuff. To some extent, you have to keep your development staff happy and interested in their jobs.
But I've never had a customer unhappy with field tested technology. They are generally unaware or unconcerned about the technology that is used in producing a product. Their number one priority is how it works in their daily interactions with it.
When starting a new project, two questions come to mind in evaluating if I should move to a new platform:
1) What benefits do I get from going to the new platform? If it offers me dramatically reduced development time or significant performance increases for the users, I will consider a semi-bleeding-edge technology.
2) What risks are associated with the new platform. Is it likely that there are some scenarios that I will encounter that aren't quite worked out in the new platform? Is it likely that support for this new platform will fizzle out and I'll be left holding the bag on supporting a deprecated environment? Are there support channels in place that I can use if I get stuck at a critical juncture of my project?
Like everything, it's a cost/benefit analysis. Generally speaking, while I always learn and train on new technologies, I won't build something for a client using a technology (environment, library, server platform, etc) that hasn't been widely adopted by a large number of developers for at least 6-12 months.

That depends on the context. Each organisation has to make its own decisions. The classic literature on this topic is Crossing the Chasm by Geoffrey A. Moore.

If the company/community developing the product is known for good products then I'm very happy to put a safe bet on their new products.
For example I would be quite happy to develop on Rails 3 or Ruby 1.9, as I am quite sure they will be fine when finalized.
However, I wouldn't write much code in superNewLang until I was convinced that they had a great, well supported product, or they had a feature I couldn't live without.
I tend to go with the most trusted product that suits all my needs.

You've got to ask yourself only one question... do I feel lucky?
Where's the money?
Do you get to profit big and fast enough even if tech X is a flop ?
If not, does new tech bring higher perf for a long time?
Like 64-bit CPU, Shader Model 4, heavy multi-threading
Do you see a lot of ideological trumpeting around it?
"paradigm shift" blurbs etc. - wait 2-8 years till it cools off and gets replaced :-)
Is it bulky and requires 2x of everything just to run?
let your enemy pay for it first :-)
Can you just get some basic education and a trial project without risking anything?
might as well try unless it looks like a 400 lb lady who doesn't sing :-)
There is no general answer to such a question - please go to #1


What's your leadership style in IT? [closed]

I recently got promoted to Project Manager/Supervisor. What leadership style do you think a managerial role in programming development should have?
What's your style?
Hands-off, servant leadership, unofficial or "tribal" leadership over traditional management, seem to be all the rage these days.
Basically getting out of the way and allowing the team to get their job done seems to make sense to me, but it all depends on the culture.
I would say an effective manager would already have a style, and know how he wants to work, whereas a less effective manager would probably learn from other senior managers and simply emulate "however things get done around here". Actually in a lot of places the latter is the only choice.
If you have the freedom to do things the way you want, I would probably prefer to borrow ideas from the agile/lean camp than the more traditional PMI/Prince2/PMBOK camp, but it all depends really.
The job of a manager is to get out of the way and let the developers do their jobs. If they encounter an obstacle it is your task to remove the obstacle.
I do not believe in simple management guidelines. In an ideal world, the job of a software manager would be to just provide food, computers, electricity and salaries, but we are hardly in an ideal world.
In a way, being a manager is a highway to frustration. There are few opportunities for a direct contribution to the project, you spend most of the time on planning, meetings, writing reports, and proposing future projects. In a nutshell, you have the responsibilities, while they have the joy of building things. In order to avoid quitting the job due to lack of fun, one needs to find a proper motivation which would justify the troubles.
Now, different people are motivated by different things. Some people like to participate in group efforts, some like the achievement of building things which can't be built by a lone entrepreneur, some like the power, some like the money. I think that a management style should be tailored to the intrinsic motivations of all of the involved parties. For example, it is useless to try to motivate your coworkers with money if they are primarily interested in building cool things (and vice versa).
A key competence in managing people is being able to address conflicts as early as possible. The conflicts range from trivial (X keeps committing buggy code to the repository) to critical (we need to hurry up in order to hit a deadline). I think it is very important to be able to express such concerns frankly and clearly, regardless of the management style. Thus, at the end of the day, oral communication capacities are at least as important as the management style.
I don't think it matters a lot what style you choose. When leadership is "broken" it is usually due to more basic things not being done right.
Consistency: stick to your style unless you are sure it isn't working out.
Honesty: might seem obvious, but when "fooling around" too much with carrot and stick it can get out of control
Respect: tech guys have all kinds of different characters, but grasping what they value is easy - being passionate about technology and using it in a professional way will open hearts. Waving your iPhone around to show off fancy-looking but technologically trivial apps might have the opposite effect ;)
Lead by example: your techs do extra hours? You do extra hours too!
Motivation: you don't need a jungle camp every 3 weeks, but you can still help everyone feel better about seeing each other more often than their families. Implement a Friday afternoon beer session if that is acceptable (be strict about times though, no drinking before 6pm for example). Show interest in what people are working on even if you are not part of operations. When working on abstract subjects people can have a hard time relating the value they add to the company and to the team. When "in the zone" programmers in particular can become like lone astronauts - having a broader understanding of the business, you will need to remind people of the mission (though that's mostly a PM task, and no PM is perfect either).
In the end you are a good leader when your team says "WE did it!"
There are plenty of methods with catchy names, but in general I prefer the management style to be lightweight and encourage communication.
I suspect a lot of us have had the experience of having to spend more time filling out forms than actually developing. That is both frustrating and unnecessary. Controls are important, but a new form is not the solution to every managerial problem.
As far as communication goes, many managers seem to believe that it will work if everyone reports up to them and then they send the collected information back down. That can really lead to disaster. The team needs to communicate with each other well and often.
Finally, I'd like to throw in that as tempting as it is to take a new resource for a project and get them developing as quick as possible, I think it will always work out better in the long run to hold off and get them properly trained and oriented to the project.
My style is a combination of Attilla the Hun, Napoleon Bonaparte and Nelson Mandela. Whatever you do, don't try to adopt my style.
More seriously, to be a good leader you have to develop your own style and you have to integrate that into the culture of the organisation you work in. So, the answer to your question must start with asking yourself some penetrating questions and giving honest answers to them. You must also take some time to understand the traits of the individuals in your team and figure out what makes them tick and how to motivate them as individuals. What works with one may not work with another.
And, while I'm writing, I'll direct a passing kick at the respondents who suggest that it is a manager's job to get out of the way and let the team work: it's the manager's job to manage, you have people you work for who have certain expectations of you and you have to pay attention to them as well as to the losers on your team.
I write 'losers' because you have just been promoted and they haven't. Sure, you have to lead them to great achievement but you won't do that by keeping out of their way, you'll do it by leading them in the right direction, with the right mix of carrot and stick. Oh, and don't let them know that you think they are losers, it will upset them.
First of all: if you try to adopt a "style" that's not your own, you will most likely fail. You basically just have to be yourself! (That's probably why you got promoted in the first place.) That said, there are some theorems to embrace, one being "you can always be a better leader" ;) I guess that's part of why you posted this question. My advice is to support your co-workers, and remember that it's your job to make them as good as possible. Try to keep yourself on top of everything that happens within the project and encourage communication within the team. Agile-style development helps with that. Also, try to put yourself in your co-workers' shoes and imagine what they expect and want from you. Best of luck
There is no one "style" that you can or indeed should focus on. The reality is that you are now a people manager and people are all different. You need to learn to recognize the differences in the people you are managing and respond accordingly. This is a technical role, so if you have some technical understanding then this will assist with gaining respect of the team.
Some people need to be told what/how, some people need a gentle prod and some need full ownership of a task. Learning to spot the differences is where you need to apply yourself.
Typically people fall into 4 distinct camps with different names depending on the management course of the day :)
Beginner, highly motivated, not much experience, needs a more directive approach
Learner, more capable, but may be experiencing frustration, needs coaching
Performer, very capable but may lack confidence, needs supporting in their approach
Achiever, capable and committed, needs delegation of tasks
Management 3.0: Leading Agile Developers, Developing Agile Leaders by Jurgen Appelo is a book dedicated to answering this question. http://www.management30.com/. His home page is here: http://www.jurgenappelo.com/
In his book and class, he refers to Martie, the Management 3.0 model. It is composed of
Energize People
Empower Teams
Align Constraints
Develop Competence
Grow Structure
Improve Everything
An excellent introductory presentation can be found here: http://www.slideshare.net/jurgenappelo/what-is-agile-management
Jurgen's two key takeaways.
A software team is a self-organizing system. Support it, don't obstruct it.
Agile managers work the system around the team, not the people in the team.
Enjoy.

How long does it take an experienced programmer to become proficient with a new technology / language? [closed]

I realize that the question is likely to get a lot of "it depends", but I am curious anyway. When you hire somebody new (but experienced) to the team, and they don't have expertise in the technology you are using but know something similar, how much time do you budget for them to "get online"?
I am talking about something fairly substantial, like a language, or a framework/product that has a lot of ways of doing things. Obviously, many libraries take very little time to start using.
In my own experience (10 years of experience, including a fair amount of consulting, so learning new technologies is par for the course), it takes me about three to six months of experience to become proficient at a new technology, and about a year to feel like I am approaching expert level where I know all the basics and medium-difficulty issues, along with a few areas very well.
What do you do in your projects? How do you budget the time to account for learning?
It doesn't only depend on the individual involved -- it crucially depends on the specific technology as well as the individual's background; certain technologies, especially languages, are just harder and slower to get into. I've seen world-class Java gurus with zero previous exposure to C++ take many months, say on the order of six or so, to be fully productive in C++; vice versa (world-class C++ guru with zero previous exposure to Java) I've seen take about 2-3 months; again, for extremely experienced and skilled programmers with no previous exposure to dynamic languages, being fully productive in Python can be expected to take 3-4 weeks. In each case I'm talking about 100% full-time involvement in the relevant technology, by a programmer in the world's top one percent in terms of skill and experience, within a team having several other programmers of that caliber who are also gurus in the specific language in use.
Factors that can shorten the time are previous exposure to "similar" languages/technologies, e.g. a solid background in C makes C++ slightly faster to learn, solid background in C# helps with Java, solid background in Ruby or Perl helps with Python. Factors that can lengthen the time include lack of suitably experienced teammates, not being 100%-immersed in the "new thing", and psychological resistance (not really wanting to do it with all one's heart!-).
I've focused on programming languages for my examples, but some technologies can be even harder, i.e., take longer to master -- if you've never written embedded real-hard-time programs (no dynamic allocation of memory allowed, proofs of an upper bound on response time required of all functions) even six months might not suffice; some application areas require mastery of application domains that, all on their own, can take even longer (if, to understand at all what's going on and therefore be fully productive, you need the equivalent of a BSc in Psychology, or deep knowledge of the Law, or a CPA's qualifications, etc., well, each of those takes years on its own!).
I don't think the language as such is the issue, rather the programming paradigm it encompasses.
e.g. earlier this year I tried C#, coming from a Java perspective. That was all very straightforward. However, I'm now trying Scala. Because of the functional aspect, I expect to be learning and honing my skills for a lot longer (you can write Scala in an imperative fashion, but you don't leverage its strengths doing that).
I suspect the same would apply when (say) migrating from a relational database to an OO database, vs. a MS-SQL/Oracle migration.
It does depend, mainly on how closely the language resembles a language they already know, as well as individual abilities at picking up new things. Moving between similar languages like C++, Java, and C# is very easy. Similarly, moving from (say) Win32 to MFC to .net is going to be easier than from MFC to MacOS.
Moving from C to C++ is likely to take longer, as the programmer has to learn OO methodologies. Moving from C++ to Perl or ML could take a lot longer!
However, you usually don't need to know much to get started. Moving from C++ to C# can be done in a few hours reading (on the main differences) and then you can start writing (or modifying existing) code. That's because (a) you already know how to do OO programming, and (b) 95% of the syntax is identical.
But the main thing it depends on is your definition of "proficient". With similar languages, you will be able to write good code within a few days (an algorithm is usually fairly language independent), but it usually takes months or years to become truly "proficient" in a language or large library.
So I'd say as a rule of thumb, "up to (a reasonable) speed" in a few weeks, but you might see silly "mistakes" or inefficiencies in their code for months/years until they learn all the little tricks of the language.
In the case of people learning OO, it usually seems to take a few days to get the basic concepts, and then at about the 2 year mark, a moment of epiphany occurs where the programmer suddenly realises that they truly "get" it. (I guess this is when your brain starts thinking fluently in OO rather than trying to think procedurally and then translate that into an OO approach.)
In our environment (US health care revenue cycle) it is more than just learning and becoming proficient in the language or technology stack we use to deliver our solutions to our customers. The developer also has to understand the problem domain. We work with entities that often don't document the behaviors of their systems well enough for external entities (us) to communicate with them and get the data that our customers want. Our developers are forced to think beyond the specs to build a functioning system.
There is also the inevitable "It doesn't work; fix it" problem report from the customer support staff. Frequently the problem isn't a defect in our software; it is an issue with other entities with which our software communicates. Our developers have to be able to identify (and sometimes prove) that it isn't our software so that our business analyst-types can go to that other entity and explain the issue in a way that will get them to resolve the problem.
You were expecting this answer but it all depends on the person/programmer. I have been in a situation where two equally skilled programmers had to pick up something new, one got it right away, while the other one took some time. Previous exposures to other technologies are also a factor.
Personally, in regard to something new, I budget my time to learning everything about it every chance I get. It would take about 6 months to fully be comfortable.
Hope this helps.
I am talking about something fairly substantial, like a language, or a framework / product
that has a lot of ways of doing things. Obviously, many libraries takes very little time to start using.
When you hire somebody new (but experienced) to the team, and they don't have expertise in technology you
are using, but know something similar, how much time do you budget for them to "get online."
Twenty-three work days, six hours, forty-three minutes, and seventeen point nine seconds.
What do you do in your projects? How do you budget the time to account for learning.
I think these questions are better!
Try to find an easy project in the new technology, and have them do that. If possible, have the person start by fixing bugs, then adding small features.
Learning is incremental. One can continue learning details of, say, C++ syntax throughout one's life. When one is an "expert" in a topic, it just means that the gains from learning more in that topic are growing smaller.
+1 for it depends.
It depends on such things as
the attitude and capabilities of the person learning it
is the problem area programming/paradigm well understood by that person
the similarity of the new technology / language to other technologies he/she does know
the consistency of the new technology / language in its interface (API, grammar, etc...)
what counts as proficient (knowing just the language, or also the basic library, or also runtime behaviour (interactions with the underlying technology))
Having said that, in my experience a smart person learning a new language/technology will quickly be more productive than other people with more experience in that language/technology.
See Peter Norvig's Teach Yourself Programming in Ten Years for the related question of how long does it take to become proficient in programming.
It so completely depends on whether you already know languages that are similar to the new one, and know something about the problem domain the new language is suited for. I'd say don't expect to be reasonably proficient in less than 3-6 months, but again, it depends.
To take one example, I implemented a PHP/MySQL web application a couple of years ago (total effort was about 6 months). It was my first reasonably large web application, and my first PHP ever. I've used relational databases, but this was also my first exposure to MySQL. MySQL came very quickly, as expected, since it's really only a dialect of a language I knew well. What surprised me was that PHP also came quickly. I realized that not only did it borrow ideas from Perl and C/C++, but the whole paradigm of coding with integrated SQL statements strongly drew on some experience I had in the '90s with, of all things, Informix 4GL.
At the other end of the spectrum, I've never really learned a functional language, so I'm trying to pick up Scala. This is going to take substantially longer, and there'll be a long period where my Scala will feel like Java in disguise, and not be that functional.
So ... it depends! ;-)
I agree that it depends.
You also run the risk that if the person knows one technology/paradigm, they will code in the new language/technology using the old practices/paradigms.
For example, I picked up Python really fast (I'm a Java/C++ guy), but it took a long time before I stopped writing Java-style code in Python and started thinking functionally.
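To make that shift concrete, here is a trivial, made-up Python example of the same task written "Java style" versus in a more idiomatic, functional style:

    # "Java style" Python: explicit indexing and mutation.
    def squares_of_evens_java_style(numbers):
        result = []
        i = 0
        while i < len(numbers):
            if numbers[i] % 2 == 0:
                result.append(numbers[i] * numbers[i])
            i += 1
        return result

    # Idiomatic Python: a comprehension expresses the same intent directly.
    def squares_of_evens(numbers):
        return [n * n for n in numbers if n % 2 == 0]

    assert squares_of_evens_java_style([1, 2, 3, 4]) == squares_of_evens([1, 2, 3, 4]) == [4, 16]

Both versions work, which is exactly why the habit is hard to notice: the learning cost is in unlearning the old style, not in getting the new language to run.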
To get really good, I think there's no replacement for experience. For instance, I'm sure I can easily pick up J2EE, but the experience to build the best enterprise systems is not something you can pick up that fast.

What's the good time balance between designing an application and coding it? [closed]

The question might seem trivial, but it's an actual problem: when you're working on a project, do you do any kind of architecture design before actually starting coding? Do you spend much time working together with a customer to get a detailed specs/usecases/mockups?
During coding, do you alter those architectural decisions made before? Do you go back to the customer with new set of specs/usecases/mockups?
I'm wondering, what's a good balance between all those non-coding actions and coding itself, from your experience?
Update:
Ok, so from the answers so far it seems like there are 2 approaches:
design early, then sit and code to avoid late fixes
minimize the design alone part, instead do iterative development (agile methodologies seem to prefer it that way).
I guess which way to go depends on the project, team and customer... am I right?
That which minimises the total time spent ;-)
It heavily depends on the kind of project, but generally speaking it's better to "waste" time over-designing and specifying requirements than to find out later that something was wrong and have to go all the way back to fix it.
I read something about quantitative measurements of the impact of poor design decisions in "The Mythical Man-Month", or maybe in a book called something like "Software Requirements Pro Practices" from Microsoft Press; I think the time wasted on a late fix (near product delivery) was about 10x that of early stages.
If you do agile, design and coding are the same thing. In my experience it is good to pair program during the very first stage of the project...
Have a look at scrum, agile and waterfall. This is related to project management not programming per se.
Architecture also becomes easier once you have built enough applications within a domain or on a platform. In PHP, if you use Joomla, Symfony or CodeIgniter, then your scaffolding and architecture are already in place. The same goes for ASP.NET MVC.
My personal experience tells me that you should consider different factors. There's no silver bullet. My personal list follows, grown mostly by experience.
If you are developing something that is well known in detail, the development team is spread out and has difficulty communicating efficiently all together, the team has strong or huge dependencies on the work of other teams, and what you are developing has a fundamental long-term importance that will be difficult to change in the future (e.g. file formats), go for a very long design phase, akin to a waterfall model. You should also spend a lot on design if you plan to develop a rather complex application, and you have to deeply consider all the possible interactions between features before coding. Coding takes very little time compared to design. Also consider this if it is somehow important to keep an efficient record of how the application behaves from a very high-level point of view, and if your team tends to be highly unstable, so that your knowledge stays on paper rather than in people's brains.
If you have to implement something brand new that requires research, you want features as soon as possible, growing the application from fast feedback, and you have a pool of geeks who work in the same room, are very committed to your cause, love programming and are passionate about sharing and building together, go for agile methods.
If you are in between the previous two cases, go for an iterative approach. I normally choose a 3-month schedule. When I code alone, I work agile-like, mostly because I have to cope with frequent disruption, so I add feature by feature. However, I release iteratively, namely I don't plan to do an official, stable release before the third iteration. I want space to learn the field, make mistakes, and correct them before committing to maintain some stupid choice.
If you code in academia, you are screwed, because you have some of the issues in 1 without the manpower to accommodate them, and some of the issues in 2 without the easy communication required by agile methods.
Roughly 50/50. Whenever I've analysed my project schedules, it turns out about 50% of the time goes into design, project management, quality control, and auxiliary tasks. The remaining 50% is coding. If I don't see that 50/50 ratio, I worry.
Mind you, this is using a traditional waterfall model (which is more suited to custom-app development). Agile methods are better for shrink-wrapped software, in my opinion.
I would say it's roughly 50/50, no matter the "methodology" or project type. It only varies in how those 50% design are distributed. And that may depend on the project, but most of all it depends on the people who do the work, and how they are "wired". It's more a matter of psychology than methodology.
Some people (I'd say the more cautious characters) need a more detailed mental map before they start coding. If they don't have that map out of prior experience, they will need more "investigation" time up front.
Others yet like to just "jump in" into coding with only a rough mental map, and work out the details as they go.
Somewhere in between is to do the elaboration via spikes and prototypes, and develop the "big picture" on top of that running code. For me personally this tends to yield the best results, and the least waste. (After all, prototyping is, in a way, a test-first approach applied on solution ideas. You get an idea, test it out in a spike or prototype, then implement/integrate it with the main code base.)
My advice is: Find out the style that feels best to you personally, and stick to it. That's going to be pretty sure the style you are going to be most effective with.
Those two things are tightly coupled. At the first stage, you will definitely spend some time making design decisions. Then you will have to start coding, and in almost all cases you will come up with improvements to your previous design.
After all, it will depend on the delivery date and how much time you have overall, and then you decide accordingly how you are going to balance things. In general you make a startup design and then, during coding, you update and change it. It is also good practice to deeply involve your customer in design decisions during the development stage, to make them aware of the changes and of how much of your time each change will cost.
The longer the period between writing your specification and starting to code, the greater the chance that requirements will change. So, to answer your question: as soon as possible....
If you're suffering from too much requirements creep then I would suggest implementing smaller iterations of releases (if possible) and then creating new requirements/specification documents for each of these smaller phases.
If you can't do this.... make sure you have a good change management process in place.
My google-fu is failing drastically, but I recently read something to the effect of:
"Spend 6 months coding, 6 months designing and 6 months testing. The good news is, they're all the same 6 months."
It's important to design enough to have a map of what you are trying to code, and how it relates to the rest of the system. You can't just code most large projects - they're too big, and usually involve multiple components. I've done that when I was young, and you end up with a big ball of mud, or stay up all night for a week refactoring it.
What I tend to do now is design down to the package level, and assign roles to components. On large systems, getting to the component selection stage can take several months and involve some trial and prototype coding.
Then the APIs and implementations of each package are evolved, based on what messages the functionality requires, and on how the clients of the packages evolve to cause the emergence of further requirements or constraints. I usually evolve an API by designing a pure interface (by writing the code for it) with unit tests for each known use case, then implementing it. So there is some writing of code involved in designing - the best representation of the API is usually the code and inline documentation, and it's easiest to confirm that the client can perform the actions required to satisfy a use case (and that the code to do so is not excessively complex) by writing code which exercises the API in that way; that code trivially becomes a unit test for the implementation of the API when it arrives. But the code written during 'designing' isn't the code which supplies the implementation of the API. For APIs with low coupling (that can be changed without breaking too many clients), I'll switch between designing and implementing modes rapidly; for ones with higher coupling, I'll typically publish the API and use-case examples for peer review before committing to implementing them.
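A rough Python sketch of that interface-plus-test style (the message store and its use case are invented purely for illustration):

    import abc
    import unittest

    class MessageStore(abc.ABC):
        # Pure interface: written while designing, before any implementation exists.

        @abc.abstractmethod
        def save(self, key, message):
            ...

        @abc.abstractmethod
        def load(self, key):
            # Return the stored message, or None if the key is unknown.
            ...

    # A known use case, written against the interface. It documents the API now
    # and becomes a unit test for whichever implementation arrives later.
    class RoundTripUseCase(unittest.TestCase):
        def make_store(self):
            return InMemoryMessageStore()  # swap in the real implementation here

        def test_saved_message_can_be_loaded(self):
            store = self.make_store()
            store.save("greeting", "hello")
            self.assertEqual(store.load("greeting"), "hello")
            self.assertIsNone(store.load("missing"))

    # A later, trivial implementation that makes the design-time test pass.
    class InMemoryMessageStore(MessageStore):
        def __init__(self):
            self._data = {}

        def save(self, key, message):
            self._data[key] = message

        def load(self, key):
            return self._data.get(key)

    if __name__ == "__main__":
        unittest.main()

The point is that the test code written during design is cheap to produce, while the implementation behind it can be deferred or replaced without invalidating it.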
As aleemb said, this really is a project management question. I suggest you read up on several methodologies, find the useful and not-so-useful parts of each, and evaluate your own circumstances (team size/experience, customer engagement and commitment levels, what's done in your organization, schedule/budget, etc.) and come up with the best schedule you can. It really all just depends on your specific circumstances.
Think about how many people are going to be involved in writing the software.
If it's just a one-developer job, maybe take a smaller percentage for design. If you're going to have 30 people working on it, you probably want a lot bigger fraction for the design.
Getting teams of developers to write software is much like partitioning software up across multiple CPU's - you are going to get the best return for added CPU (read 'developer') when you can minimize the necessary communication between them. You sure don't want to get 10's of k-loc into your project before the developers start discussing architectural issues.
Now you could probably also make the case that, when you do a better job with the design phase, the coding will actually take less time and be less painful. Measure twice and cut once, and all that.
Also, you probably should think about the likelihood of the project being 'put on hold'; design artifacts have much better shelf life than immature code.
Depends on your chosen methodology.
Traditionally with Big Design Up Front or Waterfall you spend 90% of the time designing and 10 % of the time coding. You then spend another 90% of the time handling all the changes that the initial design missed. And another 90% of the time chasing bugs.
With modern Agile development you spend 10% of the time designing and 90% of the time coding. then another 90% handling all the changes that the customer representative forgot to mention and another 90% of the time chasing bugs.

XP vs Traditional good project management [closed]

I have been in the IT industry for 10 years now but have worked in "traditionally" managed project teams (both well managed and badly managed ones).
I have heard of the "new" scrum or XP type of project management and yearned to be part of one (as s/w folks we always like anything new I guess) but have not got an opportunity.
My question is this - what are your experiences in moving to the "new" way - was it significantly better or worse or not any different? Has there been any project success rate improvement when using XP way of development or it is same as any well managed traditional projects?
This should not be a political question but just your experiences as you have moved to the new world or experienced at least once and back.
Thanks in advance
Before I ever heard of XP, I had a really good manager (Mike) at an early job I had. He was used to managing engineers and transitioned to managing software. After a few bad working experiences I looked back at his style versus typical project management I had before and after working with him.
Met with everyone at least once a day but gave us space to work
Used a whiteboard with two columns, people working and what they are working on anyone could look at that board and see if something had been done or was being done
Had everyone cross-train. I learned rcs and then cvs there and how to use make files
Ran a productive "post mortem" when a task was completed. He would ask questions like "would it have helped if X?" or "next time, can we try to..."
Kept everyone working on short tasks and managed our time so we were always working on something but never had a ton of stuff piled up
Mike did everything on paper. He would keep notebooks and index cards with him. He insisted that anything asked of him by management be converted into manageable tasks, often written on note cards. He refused to have anyone work on anything that couldn't be clearly explained or had a clear objective. He would ask the VPs "what do you mean by faster?" "What kinds of metrics are the reports meant to show?" "Why should this be a priority?" He seemed to have near infinite patience in writing out what needed to be done and what was meant by "done"
When I first read the XP book, I was amazed by how much was familiar as "the way Mike worked"
It seems that Agile is just about implementing a set of best practices and evaluating how they work in your environment. When they don't work, change them. When they do work, stick to them.
I think the real problem with traditional project management is that more often than not, it doesn't really exist. I'm amazed by how many shops claim to use RUP or Code Complete or even Agile and don't actually have anything recognizable as project management. Sure, there are meetings. And people called project managers. But ask a simple question like "what has been done on project X" or "what is left to do on project Y" and no one has an answer. They have to dig though emails or point to a comically inaccurate MS project file.
If a person claimed to be on a diet and couldn't answer questions on what they were eating or how they were exercising; would you accept that they were really on a diet?
You take your old baggage with you when you go. Meaning that any project management bad practices you had before will still linger.
However, I will say that things improved greatly when we began to close the loop between us and the customer. Greater and more frequent feedback and prototyping with the customer means far fewer moments of the customer saying, "This is not what I wanted."
I've used (a slightly modified) Scrum before at work and here are my thoughts:
The daily meetings and burn-down provided motivation to make progress on tasks (a minimal sketch of the burn-down bookkeeping follows these points).
Our manager could talk to colleagues across the pond and show them "this is what we're working on this month."
You knew exactly what tasks you needed to get done, and had already estimated the time required to complete.
When priorities changed (new tasks, important bugs added), there was a well-defined process to handle adding them to the sprint or simply pushing them to the backlog.
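A burn-down chart is just remaining estimated work plotted against the days of the sprint. A minimal sketch of the bookkeeping, with invented numbers, might look like this in Python:

    sprint_total_hours = 80
    # Remaining estimated hours recorded at the end of each day of a 10-day sprint.
    remaining_by_day = [74, 70, 61, 55, 50, 41, 33, 20, 8, 0]

    sprint_days = len(remaining_by_day)
    ideal_burn_per_day = sprint_total_hours / sprint_days

    for day, remaining in enumerate(remaining_by_day, start=1):
        ideal_remaining = sprint_total_hours - ideal_burn_per_day * day
        status = "ahead" if remaining <= ideal_remaining else "behind"
        print(f"Day {day:2d}: {remaining:3d}h left (ideal {ideal_remaining:5.1f}h) -> {status}")

Seeing "behind" for a couple of days in a row is usually the trigger for the well-defined re-prioritisation process mentioned above.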
These are lovely answers, but I think everyone's confusing project management with development/design methodologies.
I'm on a team that started Scrum a few months ago and we seem to be getting things done faster and with much less "waste" (projects that are scrapped). Just my observations from our small team (4 devs).
I've found the overall move to Agile/XP practices very positive, in many ways it front loads quality into the project/development process. You'll need buy-in from management and from the team to really see success...a few suggestions:
trial any change with a small project (2-3 people)
understand what areas your current team can most improve (quality? productivity? time-to-market?) and incorporate a few Agile/XP/Scrum (whatever) processes... don't incorporate them all at the same time, and understand which processes address which issues prior to any change
if possible - track those areas you're looking to change and compare to another project running at the same time (the mere focus on improving something often is enough to improve it; there's a study/term for this, but I forget what it is)
sometimes you'll see a dip in performance as you begin a new process, this is part of the learning curve
never assume that a good change today will remain a good change tomorrow, always review your project areas and be ready to change any process at any time
no change remains good forever, just like refactoring code, refactor your processes
ensure you get buy in from the team and management, you can't force success
I like some of the things the agile approaches do, but I also value some of the things traditional approaches do.
Both can work, as can a mixture of the two, which is what I find works best for my team now. I have implemented incremental development and it really helps us; iterative development is a little harder and we're still working on that. However, we have a variety of constituents, and many of our stakeholders (and PMs) prefer traditional artifacts and milestones. So we have to keep finding the right balance.
I have also found that even more important than the methodology is the people implementing it. Good people find a way to do good work and get things done regardless of the methodology, although certainly the methodology can have effects on efficiency (and morale :) ). Poorly aligned resources, however, can use the finest methodology and find ways to deliver poor results.
For developers, the great lessons of XP & Co. are shorter release cycles and a more evolutionary approach - in the sense that changing requirements are accepted as a natural part of any project. Also, customers suggest solutions, but designers and developers need to understand the problems.
Lessons for managers: developers are not exchangeable spec-to-code converters; their individual strengths and weaknesses can make a productivity difference of 10 or more for a given topic. Knowledge and experience are the most valuable skills in your team, and developers can teach each other. Managers need not understand what developers do in order to enforce desired results.
XP & Co. usually mix the solutions to these with the problem of making a company change. The heroic XP consultant single-handedly saving a doomed, delayed and derailed project acts in large part as a buffer between development and management. But if you are looking at what to learn, you have to separate these aspects.
What I've learnt in recent years is that bugs aren't a personality fault, and that the sky doesn't fall when specs change. I've learnt that while design errors are still the most expensive to make, there isn't a single "perfect" design. Instead of getting one thing right we need to implement safeguards so that, of all the many details, none goes wrong - and I've learnt to use the leeway between "right" and "not wrong" to our advantage.
My experience has been that I prefer Scrum over traditional approaches, because it rarely happens that requirements can stay unchanged for the length of a project; projects usually seem to run at least 6 months, up to my current one, which is over a year.
There can also be cases where there isn't any project management and everyone just scrambles to "make it work", so having some formal structure is better than nothing. There is also the question of how well the team comes together; egos rarely appear, as it isn't one person's code but rather the team's code, and there is a kind of group think where, while each person has their own view, no one tries to make everyone else see things that way.
At times it seems to me that some Scrum and Agile approaches I've used end up being like rapids instead of a big waterfall. What I mean is that the cycle of gather requirements - Analyse and Design - Implement - Test - Deploy and get updated requirements seems to be repeated over and over so that what comes out in the end would be extremely hard to state at the beginning of the project unless the project sponsor could give very detailed requirements that would never change.

I need this baby in a month - send me nine women!

Under what circumstances - if any - does adding programmers to a team actually speed development of an already late project?
The exact circumstances are obviously very specific to your project ( e.g. development team, management style, process maturity, difficulty of the subject matter, etc.). In order to scope this a bit better so we can speak about it in anything but sweeping oversimplifications, I'm going to restate your question:
Under what circumstances, if any, can adding team members to a software development project that is running late result in a reduction of the actual ship date with a level of quality equal to that if the existing team were allowed to work until completion?
There are a number of things that I think are necessary, but not sufficient, for this to occur (in no particular order):
The proposed individuals to be added to the project must have:
At least a reasonable understanding of the problem domain of the project
Be proficient in the language of the project and the specific technologies that they would use for the tasks they would be given
Their proficiency must /not/ be much less or much greater than the weakest or strongest existing member respectively. Weak members will drain your existing staff with tertiary problems while a new person who is too strong will disrupt the team with how everything they have done and are doing is wrong.
Have good communication skills
Be highly motivated (e.g. be able to work independently without prodding)
The existing team members must have:
Excellent communication skills
Excellent time management skills
The project lead/management must have:
Good prioritization and resource allocation abilities
A high level of respect from the existing team members
Excellent communication skills
The project must have:
A good, completed, and documented software design specification
Good documentation of things already implemented
A modular design to allow clear chunks of responsibility to be carved out
Sufficient automated processes for quality assurance at the required defect level. These might include such things as unit tests, regression tests, automated build deployments, etc.
A bug/feature tracking system that is currently in-place and in-use by the team (e.g. trac, SourceForge, FogBugz, etc).
One of the first things that should be discussed is whether the ship date can be slipped, whether features can be cut, and whether some combination of the two will allow you to satisfy the release with your existing staff. Many times it's a couple of features that are really hogging the resources of the team and that won't deliver value equal to the investment. So give your project's priorities a serious review before anything else.
If the outcome of the above paragraph isn't sufficient, then visit the list above. If you caught the schedule slip early, the addition of the right team members at the right time may save the release. Unfortunately, the closer you get to your expected ship date, the more things can go wrong with adding people. At one point, you'll cross the "point of no return" where no amount of change (other than shipping the current development branch) can save your release.
I could go on and on, but I think I hit the major points. Outside of the project, and in terms of your career, the company's future success, etc., one of the things that you should definitely do is figure out why you were late, whether anything could have been done to alert you earlier, and what measures you need to take to prevent it in the future. A late project usually occurs because you either:
Were late before you started (more stuff than time), and/or
Slipped 1 hour, 1 day at a time.
Hope that helps!
It only helps if you have a resource-driven project.
For instance, consider this:
You need to paint a large poster, say 4 by 6 meters. A poster that big, you can probably put two or three people in front of it, and have them paint in parallel. However, placing 20 people in front of it won't work. Additionally, you'll need skilled people, unless you want a crappy poster.
However, if your project is to stuff envelopes with ready-printed letters (like "You MIGHT have won!") then the more people you add, the faster it goes. There is some overhead in doling out stacks of work, so you can't get benefits all the way up to the point where you have one person per envelope, but you can get benefits from far more than just 2 or 3 people.
So if your project can easily be divided into small chunks, and if the team members can get up to speed quickly (like... instantaneously), then adding more people will make it go faster, up to a point.
Sadly, not many projects are like that in our world, which is why docgnome's tip about the Mythical Man-Month book is really good advice.
Maybe if the following conditions apply:
The new programmers already understand the project and don't need any ramp-up time.
The new programmers already are proficient with the development environment.
No administrative time is needed to add the developers to the team.
Almost no communication is required between team members.
I'll let you know the first time I see all of these at once.
According to the Mythical Man-Month, the main reason adding people to a late project makes it later is the O(n^2) communication overhead.
I've experienced one primary exception to this: if there's only one person on a project, it's almost always doomed. Adding a second one speeds it up almost every time. That's because communication isn't overhead in that case - it's a helpful opportunity to clarify your thoughts and make fewer stupid mistakes.
Also, as you obviously knew when you posted your question, the advice from the Mythical Man-Month only applies to late projects. If your project isn't already late, it is quite possible that adding people won't make it later. Assuming you do it properly, of course.
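To make the O(n^2) point concrete: with n people there are n(n-1)/2 pairwise communication channels, so coordination cost grows roughly quadratically while the available hands grow only linearly. A quick sketch in Python:

    def communication_channels(team_size):
        # Number of pairwise communication paths in a team of the given size.
        return team_size * (team_size - 1) // 2

    for n in (2, 4, 8, 16):
        print(f"{n:2d} people -> {communication_channels(n):3d} channels")

    # 2 people have 1 channel; 16 people have 120. An 8x larger team has
    # 120x the pairwise channels to keep in sync, which is where late
    # additions lose their time.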
If the existing programmers are totally incompetent, then adding competent programmers may help.
I can imagine a situation where you had a very modular system, and the existing programmer(s) hadn't even started on a very isolated module. In that case, assigning just that portion of the project to a new programmer might help.
Basically the Mythical Man Month references are correct, except in contrived cases like the one I made up. Mr. Brooks did solid research to demonstrate that after a certain point, the networking and communication costs of adding new programmers to a project will outweigh any benefits you gain from their productivity.
If the new people focus on testing
If you can isolate independent features that don't create new dependencies
If you can orthogonalise some aspects of the project (especially non-coding tasks such as visual design/layout, database tuning/indexing, or server setup/network configuration) so that one person can work on that while the others carry on with application code
If the people know each other, and the technology, and the business requirements, and the design, well enough to be able to do things with a knowledge of when they'll step on each other's toes and how to avoid doing so (this, of course, is pretty hard to arrange if it isn't already the case)
Only when, at that late stage, you have some independent tasks (almost 0% interaction with other parts of the project) not yet tackled by anybody, and you can bring onto the team somebody who is a specialist in that domain. The addition of a team member has to minimize the disruption for the rest of the team.
Rather than adding programmers, one can think about adding administrative help. Anything that will remove distractions, improve focus, or improve motivation can be helpful. This includes system administration, as well as more prosaic things like getting lunches.
Obviously every project is different, but most development jobs can be assumed to involve a certain amount of collaboration among developers. Where this is the case, my experience has been that fresh resources can actually, unintentionally, slow down the people they rely on to bring them up to speed, and in some cases these can be your key people (incidentally, it's usually 'key' people who would take the time to educate a newb). Once they are up to speed, there are no guarantees that their work will fit into the established 'rules' or 'work culture' of the rest of the team. So again, it can do more harm than good. That aside, these are the circumstances where it might be beneficial:
1) The new resource has a tight task which requires a minimum of interaction with other developers and a skill set that's already been demonstrated. (ie. porting existing code to a new platform, externally refactoring a dead module that's currently locked down in the existing code base).
2) The project is managed in such a way that other more senior team members time can be shared to assist bringing the newb up to speed and mentoring them along the way to ensure their work is compatible with what's already been done.
3) The other team members are very patient.
I suppose the adding people toward the end of the work could speed things up if:
The work can be done in parallel.
The amount saved by added resources is more than the amount of time lost by having the people experienced with the project explain things to those that are inexperienced.
EDIT: I forgot to mention that this kind of thing doesn't happen all that often. Usually it is fairly straightforward stuff, like admin screens that do simple CRUD to a table. These days these types of tools can mostly be autogenerated anyway.
Be careful of managers that bank on this kind of work to hand off, though. It sounds great, but in reality there usually isn't enough of it to trim any significant time off the project.
Self-contained modules that have yet to be started
Lacking development tools they can integrate (like an automated build manager)
Primarily I'm thinking of things that let them stay out of the currently developing people's way. I do agree with Mythical Man-Month, but I also think there are exceptions to everything.
I think adding people to a team may speed up a project more than adding them to the project itself.
I often run into the problem of having too many concurrent projects. Any one of those projects could be completed faster if I could focus on that project alone. By adding team members, I could transition off other projects.
Of course, this assumes that you've hired capable, self-motivated developers, who are able to inherit large projects and learn independently. :-)
If the extra resources complement your existing team, it can be ideal. For example, if you are about to set up your production hardware and verify that the database is actually tuned, as opposed to just returning good results (which your team knows as domain experts), borrowing time from a good DBA who works on the project next to yours can speed the team up without much training cost.
Simply put, it comes down to comparing the time left and the productivity you will get from someone, excluding the amount of time it takes the additional resources to come up to speed and be productive, and subtracting the time existing resources invest in teaching them. The key factors (in order of significance):
How good the resource is at picking it up. The best developers can walk onto a new site and be productive fixing bugs almost instantly with little assistance. This skill is rare but can be learnt.
The segregability of tasks. They need to be able to work on objects and functions without tripping over the existing developers and slowing them down.
The complexity of the project and the documentation available. If it's a vanilla best-practice ASP.NET application with common, well-documented business scenarios, then a good developer can just get stuck in straight away. This factor more than any will determine how much time the existing resources will have to invest in teaching, and therefore the initial negative impact of the new resources.
The amount of time left. This is often mis-estimated too. Frequently the logic will be "we only have x weeks left and it will take x+1 weeks to get someone up to speed." In reality the project IS going to slip and does in fact have 2x weeks of dev left to go, and getting more resources on sooner rather than later will help.
Where a team is already used to pair programming, then adding another developer who is already skilled at pairing may not slow the project down, particularly if development is proceeding with a TDD style.
The new developer will slowly become more productive as they understand the code base more, and any misunderstandings will be caught very early either by their pair, or by the test suite that is run before every check-in (and there should ideally be a check in at least every ten minutes).
However, the effects of the extra communication overheads need to be taken into account. It is important not to dilute the existing knowledge of the project too much.
Adding developers makes sense when the productivity contributed by the additional developers exceeds the productivity lost to training and managing those developers.
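A back-of-envelope sketch of that trade-off in Python (every number here is invented for illustration):

    # Does adding a developer to a late project help before the deadline?
    weeks_left = 8
    ramp_up_weeks = 3            # new hire produces little of value during ramp-up
    new_dev_velocity = 0.8       # fraction of a seasoned team member's output
    mentoring_drag_weeks = 1.5   # existing developers' time spent teaching

    gained = (weeks_left - ramp_up_weeks) * new_dev_velocity
    lost = mentoring_drag_weeks
    net_developer_weeks = gained - lost

    print(f"Net effect: {net_developer_weeks:+.1f} developer-weeks")

With these assumptions the net effect is +2.5 developer-weeks, so the addition helps; rerun it with only 4 weeks left and the same numbers give -0.7, i.e. the new person would make the project later.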
