Throwing out your first draft of work - is there a compatible methodology? [closed] - project-management

Are there any programming methodologies that take into account the idea that the first round of written code is likely not to be what you want to keep? The most common thing I hear from a developer at the end of a project is 'If I could do that again, I'd do it so differently.' This is almost an exact mirror of the process a writer goes through after finishing a first draft. The difference seems to be that writers then rewrite and rewrite again until they're ready to move into the editing stage, whereas developers tend to write and then refine their first draft through testing and refactoring.
I'm certainly no fan of using analogies from other fields to define the development process, but I do think there's value in recognising that your first draft is just to get ideas down, and that you need further rewrites to produce something worthwhile. I just don't think I've ever encountered a programming process or project methodology that recognises that, so I was hoping the vast collective consciousness of Stack Overflow might have an idea of where I might start exploring this possibility.

Prototyping seems to address the problem in some way. The Wikipedia article on prototyping names an approach called 'throwaway prototyping', which seems in line with your way of thinking.

What you are describing is called throwaway prototyping. The idea is that as soon as you have your preliminary requirements, you create a basic model of the system to show the user and/or customer what the final system might look like and how it might function (although there's no real functionality). The user provides feedback on this prototype.
If you wanted to utilize throwaway prototyping, my first suggestion would be to start looking at the spiral process model. However, I'm not familiar with very many methodologies that explicitly utilize throwaway prototyping; the more "agile" methodologies favor evolutionary or incremental prototyping. The only time I've personally used throwaway prototyping was to prototype just the user interface, since the underlying system was already under development; I used whiteboards, pen, and paper for the prototypes.
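For a feel of what a throwaway prototype can look like in code, here is a minimal sketch (hypothetical names, hard-coded data, no real functionality): just enough to walk a user through a screen and collect feedback before the code is deleted.

    # throwaway_prototype.py - a disposable mock-up of a report screen.
    # All data is hardcoded; nothing here is meant to survive into the real system.

    CANNED_REPORT = {
        "title": "Monthly Sales (mock data)",
        "rows": [("North", 12000), ("South", 9500), ("West", 7800)],
    }

    def show_report():
        """Render the screen the user would see, with fake numbers."""
        print(CANNED_REPORT["title"])
        for region, total in CANNED_REPORT["rows"]:
            print(f"  {region:<8} {total:>8}")
        print("\n[Export to CSV]  [Email report]   <- buttons are stubs")

    if __name__ == "__main__":
        show_report()  # demo it to the user, gather feedback, then throw it away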

Besides, this is one of Fred Brooks's ideas ('plan to throw one away; you will, anyhow'), which he himself found less than effective on later reflection: you've probably heard of teams throwing away two systems after planning to throw away just one. Fortunately, many such troubles can be overcome nowadays thanks to agile methodologies.

This is exactly the argument that Bruce Eckel is making here.

I would argue that the best thing to do is to modularize very well. For example, if you're writing a kernel, the "get the next available free memory frame" operation should live in its own function. That way, when you discover it's written in a really crappy way, you simply erase it (you are using version control, of course) and start from scratch. Your existing modules and their tests then serve as a way to validate your new code.
Going from start to finish and then start to finish again is an excellent way of going through a large percentage of the same bugs again.
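A minimal sketch of that idea (hypothetical names, and in Python rather than kernel C for brevity): keep the operation behind one small interface so the implementation can be thrown away, and let the tests you already have validate the rewrite.

    # frame_allocator.py - the "get next free frame" operation lives behind
    # one small interface, so a crappy first draft can be deleted and
    # rewritten without touching its callers.

    def next_free_frame(frame_map):
        """First draft: linear scan for a free frame. Returns its index."""
        for i, used in enumerate(frame_map):
            if not used:
                return i
        raise MemoryError("no free frames")

    # test_frame_allocator.py - these tests outlive the rewrite: erase the
    # implementation above, write a better one, and run them again.
    def test_finds_first_free_frame():
        assert next_free_frame([True, True, False, True]) == 2

    def test_raises_when_full():
        try:
            next_free_frame([True, True])
            assert False, "expected MemoryError"
        except MemoryError:
            pass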

Related

Deciding when to build your own or to reuse in a project with deadlines [closed]

I have a project that requires some complicated components to be built. Some of these components are promised by obscure software packages that are proving to be poorly documented and difficult to configure and use.
I am wondering where other people draw the line during the software research phase when deciding whether to build their own packages or to stick with the existing ones?
And what percentage of the total project time should I spend on this kind of research?
Thanks in advance,
Alex
Ask yourself which is likely to take longer, hammering the components to fit your needs or writing your own.
Personally I pretty much always use solid, comprehensive libraries (jQuery for web development, DevExpress for WinForms) and fill in the gaps with my own code.
The only exception I remember off the top of my head was a tooltip plugin for a web application. I tried about three of them, wasting hours and hours adapting each to my needs: modifying their source code, playing with their images, fixing obscure CSS rules that baffle IE7 (because IE8 defaults to IE7 mode on the intranet), but never quite getting it right. Then I just gave up and rolled my own in half an hour.
Not to say there aren't plenty of good components out there that are flexible enough to be used in active development environments, but you're unlikely to find them in the heat of developing your stuff with deadlines looming overhead. Use your free-ish time to look for them and bookmark them, try them out in a few toy projects and see how they work, so the next time you need something like them you know what to use.
If you have to fix some minor bugs or otherwise have to observe some patterns the code doesn't currently take into account, consider contributing back into the code base as a good citizen.
If you find yourself having to substantially recode some pre-provided code to get it to work, then maybe the fact it was already "coded" is irrelevant. Bite down and chew.
If it's bologna and you need to reinvent the "wheel", consider that you've got a job that may not be compensating you for its actual value.
I usually draw the line at about 1/10. Meaning if it has already taken me, say, 1 day and I still haven't gotten the off-the-shelf thing working and it would only take me 10 days to do it myself, I do it myself.
Even when it takes a little longer, it's often better in the long run to avoid the complicated, hard-to-use thing. Or, at the very least, I get a better idea of what I really need and I can pick an off-the-shelf package with my eyes wider open.
Well, I think it all depends.
It is possible to spend more time trying to configure and understand a poorly documented package than you would have spent building it yourself. I would say that if you can build it faster than you can learn the badly documented alternative, then go for it.
Otherwise, if the existing package promises great features and will not eat up too much of the total project time, go with it. Often it is very difficult to draw the line; it all depends on the situation at hand.
You could also look for alternatives to what you have now.

How to prove to colleagues that use-cases are important? [closed]

... and how to prove to management that use-cases can be informal and still useful?
Hi folks,
I came into the middle of a project and found out that there are no use-cases, user stories, requirements, nor anything resembling a specification. Since the deadlines are short, the current dev team doesn't want to spend time on such things. I wanted to join the project, but by digging deeper I found out that the team adds features just for their "wow effect" and chooses what to build based on how easy the underlying technology makes it. I was surprised they have managed to get this far (more than 4 months) without requirements, but this is where we are now. I believe the path they have chosen is the surest way to kill a product that has good marketing value.
Am I right, and what would you do in similar circumstances to persuade the dev team/management to write use-cases/requirements before moving forward? Thanks in advance, kh.
P.S. Two copies of Cockburn's book are on the bookshelf...
You should give your colleagues the use-case spiel :D Tell them that use-cases are useful as they're:
A way of capturing business processes in a manner which is reasonably comprehensible by all stakeholders. This helps to bridge the gap between programmers, clients and users.
Traceable units of functionality. Use-cases are formed (ideally) in the analysis phase, referenced in the design phase, and can be used as sources for test cases later on (see the sketch after this list).
Quick and easy to write up and useful, even if informal.
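To make the traceability point concrete, here is a minimal, hypothetical sketch: an informal use case captured as plain text, and a test case derived directly from one of its steps.

    # An informal use case (deliberately lightweight):
    #
    #   Use case: Withdraw cash
    #   Actor:    Account holder
    #   Main flow: 1. Enter amount. 2. System checks balance. 3. Dispense cash.
    #   Exception: 2a. Insufficient funds -> reject, balance unchanged.

    def withdraw(balance, amount):
        """System behaviour for steps 2-3 and exception 2a."""
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    # Test derived straight from exception 2a of the use case:
    def test_insufficient_funds_leaves_balance_unchanged():
        balance = 50
        try:
            withdraw(balance, 100)
            assert False, "expected rejection"
        except ValueError:
            assert balance == 50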
If you need more ammunition, you might want to read Use cases - Yesterday, today and tomorrow by none other than Ivar Jacobson.
If your colleagues still can't see the potential usefulness of use cases as a business analysis tool, then they're probably beyond help :P You should remind them that they're developing software to meet other people's needs and solve their problems in the long term, not to ostentatiously impress them in the short term with petty gimmicks. And so a little bit of direction and specification helps. Even if the use-cases themselves don't prove to be that useful, the simple act of coming up with them will force your colleagues to consider the actual underlying purpose of the software.
Ask questions, of both sides. Of development, ask them if they are certain that all of the ways in which they have considered using the application are all of the ways in which the end-users will want to use it; if they say they have, ask for proof. Of management, ask if they've ever used software that does everything they want, but still ends up being hard to use (they will have). These questions will seed the concept that what will be delivered might not be what is desired, on both sides; use that seed of an idea, then, to open up discussions (not documents, not at the start) on how the software will be used, and in what way any differences can be resolved. They'll get around to use-case documents eventually.
I am a product manager by profession, and my first reaction to your post is that ideas can come from anywhere, and if the dev team has decent ideas they should be incorporated into the product.
Having said that, a product cannot develop a soul (a simple message) through a string of disconnected ideas that do not serve the ultimate purpose: solving the needs of a target user. Ultimately it boils down to making the case that time is better spent on requirements/use cases that make sense for the product, while the cost of not having a clear strategy/end goal is too many chefs and a muddled product message.
The ultimate way to make this message hit home is to involve other stakeholders and have development demonstrate their work. Eventually there will be disagreement, and a more formalized (less cowboy) approach will lead to a more refined and simple product.
One of the problems you mention is the tight schedule and the scope creep induced by the devs themselves. Explain to them that by using use cases you can save time by dropping features that would otherwise end up on the "never used" pile. With use cases you can find out which features customers need and will pay for, and by cutting unimportant features from the scope you gain the time to implement the important ones. Apart from defining the scope, use cases also help identify all the stakeholders, which can help you focus even better while scoping and prevent you from forgetting trivial-seeming things that are a must if the product is to be usable. The third most important thing about use cases is that they let you start thinking about corner cases that might matter to the customer before development begins, so you can work out the ideal solution with the customer instead of letting the coder decide alone under deadline pressure.
Just show them.
Example is not the best way of educating people; it is the only one.
Lead by example, focusing on extensions and exceptions. In other words, emphasize the failure scenarios, because everyone knows how the system should work when everything goes right. The real value of written use cases is identifying what should happen when something goes wrong.
That noted, consider that you may have to live without written use cases. In the environment you describe, a major win is any sort of requirements documentation; screen comps and/or prototypes are often easier to introduce.

Selling TDD to the team [closed]

I have been doing TDD for the past 3 years. We were a small company, and we had very solid support for most aspects of the agile process from management. Everyone on the development team was sold on the process, and thus the upfront investment it usually takes to build fixtures was accepted, knowing it would pay off along the way (code that starts an HTTP server, code that populates SQL databases before tests, etc.). Documentation mostly happened in the tests, and help requests were usually presented in the form of a failing test.
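As a rough sketch of the kind of fixture meant here (hypothetical table and test names, using Python's built-in sqlite3 and unittest rather than whatever stack the team actually used): the fixture populates a throwaway in-memory database before each test.

    import sqlite3
    import unittest

    class CustomerQueryTest(unittest.TestCase):
        """setUp builds a fresh, disposable database for every test."""

        def setUp(self):
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
            self.db.executemany(
                "INSERT INTO customers VALUES (?, ?)",
                [(1, "Ada"), (2, "Grace")],
            )

        def tearDown(self):
            self.db.close()

        def test_lookup_by_id(self):
            row = self.db.execute(
                "SELECT name FROM customers WHERE id = ?", (1,)
            ).fetchone()
            self.assertEqual(row[0], "Ada")

    if __name__ == "__main__":
        unittest.main()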
Now I have moved to a bigger company, and while management is supportive of the agile process, teammates are a mixed bag: some see it as useful, some do it because of management, and some don't see the value. It's been a challenge to convince people to spend time building fixtures, or to convince a teammate that the best way for me to help him is for him to take the time to write a failing test.
So what do you think is the best way to sell TDD to a hesitant teammate? The objections are usually: 'It's an unneeded cost', 'we can always write tests after the fact for the parts that are important', 'it's a buzzword; teams pick it up and then it falls to the side as the heavy grind begins', etc.
"the best way to sell TDD to a hesitant teammate"
You can't. Don't waste time "selling".
Instead, invest time in "proving".
Just do it. Be successful. When people ask what the secret of your success is, then reveal the TDD. Not before.
Simple: maintainability. TDD gives you the ability to make changes and see where those changes affect the rest of the code. The larger the code base, the more imperative it is that there be tests to validate any new changes.
Correctness. Although tests can themselves be broken, eventually they reach a point where they make sure the components are doing what they are supposed to. The better the developer, the faster that point is reached.
Another advantage is that TDD informs the design of the components in the system. If you are trying to test something and the test is too complicated, it probably means you need to break the problem down into smaller parts...
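A hypothetical sketch of that feedback loop (names invented for illustration): when a test would need a mail server, a database, and a template engine all at once, the test's complexity is telling you the function does too much, and splitting it yields pieces that each test trivially.

    # Smell: a test for send_monthly_report() would need an SMTP server, a
    # database, and a template engine at once - feedback that the function
    # does too much.

    # After breaking the problem down, each piece gets a trivial test:
    def render_subject(month: str, total: float) -> str:
        """Pure function extracted from the monolith; needs no infrastructure."""
        return f"Sales report {month}: {total:.2f}"

    def test_render_subject():
        assert render_subject("2009-04", 1234.5) == "Sales report 2009-04: 1234.50"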
To sell it to people, you say that in the long run it makes adding new features cheaper and reduces the risk of breaking existing functionality. So it reduces cost.
For the hesitant teammate, be patient; wait for an opportunity, then pounce. In software development there will undoubtedly be a problem that TDD would have prevented or mitigated. Be on the lookout for such an opportunity, and work with him/her to create the test(s) that should have been written from the beginning. However, make sure you craft your message in a way that doesn't embarrass your teammate.
I agree with S. Lott: you can't "sell" them, you need to show the value.
One of the most effective ways to do that is with pair programming. Granted, you then have another "sell" problem, convincing people that pairing is an effective approach, but after some time you may convince/convert a developer or two.
TDD was a tough concept for me initially, but now I can't imagine programming any other way.
I think Joel's post explains very well why testing is A Good Thing™.
I don't think he ever uses the phrase "TDD", but it's got some great info.
Show them this site: WeDoTDD.com, with case studies of actual company teams successfully practicing TDD in real companies.

The effects of design on application delivery time [closed]

Some developers, when given a task, go straight into the IDE and start coding with very little design. They may have an idea of where the application is going as they are coding. I am one of these developers. I do this because I feel that if I spend a lot of time designing, my delivery time will be much higher than if I just sit and code away the ideas in my head. My question is: how does application design affect the delivery time of a project, and does it have a big advantage over coding the agile way?
Here's an analogy: when you want to travel,
if your destination is near or in your town, you can start right away;
if you want to travel to another country, you need to pack first.
Design is preparation. Without it, you cannot go far (or you go the wrong way).
It is not a black and white situation: for some projects it is much better to jump in and start coding, for some it's better to have an extensive planning stage, and for others it is not clear cut.
If the project is small and simple enough that only a single developer is working on it, and how to build it is obvious enough that they can imagine every aspect of it in their head, then they can very well jump in and start coding.
The need for more extensive planning comes about when you have multiple developers, or when the project is large and complicated enough that a developer cannot know everything possible about how it will work from the outset, because it is too complicated to know all aspects of it in your head.
What you describe only works well if you are writing something well understood and contained. If it is similar to other software you've written, you don't need a new design because you can just re-use the old one. However, if it is something totally new, designing on the fly will get expensive: you'll find yourself rewriting too much of the code or, worse, stuck with a poor architecture that slows you down. Likewise, if you need your code to be extensible, planning ahead is necessary. If you need it to work with components from other people, planning ahead is necessary.
This approach only really works well if you are working on your own. If you have to work within a team, it is important to have a good plan so that everyone else knows what you think the end goal is. This doesn't reduce creativity; it just makes sure everyone is on the same page, and it reduces the opportunities for confusion.
Just to add a line of thought to your scenario: I work in a business called YES INTERNATIONAL CORPORATION (www.yesintl.com.au). Sometimes developers have developed something similar before, in which case the design is already in their mind. For example, I have developed database solutions in the past, which makes us very fast at delivering compared to our competition when I sit down and start a project. More experience will make you better as time goes by... I hope this helps... Andy

Need for refactoring? [closed]

Why should companies invest in refactoring components, even though it is not going to add any new features to the product?
I agree it is done to clean up the code, fix bugs, and remove dead code, but what is the payoff?
Maintenance. It will reduce your maintenance costs significantly. There is no comparison between fully factored code and the junk that sits in most companies' repositories. The latter is virtually worthless, while the former is gold.
It depends who you ask. A non-technical manager may say there is no need. A support developer would say that it would help keep the maintenance costs down.
Refactoring needs to be a part of your every-day job. You constantly refactor your code to make it more readable/maintainable/robust/reusable, etc.
Your code is a living document. If it doesn't change over time, it becomes stagnant.
Invest in testing. Invest in refactoring. Invest in writing good code.
Maintenance. Sometimes a project gets too big, or accumulates too many "fast" patches, to be expanded further. You just have to sit down calmly, clean up, and refactor.
While the other answers are all true, the power of refactoring is that it allows you to change the design of your code with predictable results. The biggest problem of maintenance is that it is virtually impossible to anticipate all requirements for complex applications.
Most new requirements can be dealt with by adding a new feature, like a new report or command. But others will require part of your application to be redesigned. This is where refactoring and its sibling, unit testing, come into play: by using refactoring techniques you can make the needed design changes safely.
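A tiny, hypothetical illustration of that safety net (invented names): the test pins down behaviour, so an extract-function refactoring can be verified immediately.

    # Before: pricing logic tangled into one function.
    def invoice_total_v1(items, vip):
        total = sum(price * qty for price, qty in items)
        if vip:
            total = total * 0.9
        return total

    # After refactoring: discount rule extracted so it can change independently.
    def apply_discount(total, vip):
        return total * 0.9 if vip else total

    def invoice_total_v2(items, vip):
        return apply_discount(sum(price * qty for price, qty in items), vip)

    # The same test validates both versions, which is what makes the
    # design change predictable:
    def test_vip_discount():
        items = [(10.0, 2), (5.0, 1)]
        assert invoice_total_v1(items, vip=True) == invoice_total_v2(items, vip=True) == 22.5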
It is not a cure-all, but another tool that improves the quality of your code (like structured programming, object orientation, etc.).
To start off with: refactoring is a tax. If the code works, then you are spending time fixing code that already works; I can see the business types looking quizzical now. A saying I like is "Legacy is another word for code that works."
That said, there are many problems with a growing code base that need to be addressed before you find yourself spending more time maintaining the code than developing features.
Personally I like the "No Broken Windows" philosophy.
If it ain't broke don't fix it.
But if you need to start fitting the components into new, unpredictable requirements, then it often makes sense to identify the bits you can extract and reuse. You need to be certain that your changes aren't introducing unexpected bugs, so you'll need good test coverage.
