Is refactoring applicable to test code? - TDD

I am learning to use test-driven development. As a part of this I need to write a few modules.
My question is: will refactoring also be applicable to test code?

Tests are code and are the pillars of your application.
So yes, you should refactor your test code with the same care you take for refactoring production code.

A good public API usually decouples well from the implementation, so in those lucky moments refactoring the implementation should cause no changes, or only very small ones, to the tests (which is a kind of proof of good API design).
But I often also test some internals: when the public API is highly abstract and does a lot of processing and algorithms in a single call, refactoring the code often hits the tests too and requires a similar amount of work.
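A minimal sketch of the happy case, using an invented `word_counts` function: the test touches only the public API, so rewriting the internals (say, replacing `Counter` with a hand-rolled loop) leaves it green.

```python
from collections import Counter

def word_counts(text):
    """Public API: map each word in the text to its frequency."""
    # Implementation detail: Counter today, but it could be rewritten
    # as a manual dict loop without the test noticing.
    return dict(Counter(text.split()))

def test_word_counts():
    # The test only knows inputs and outputs, so refactoring the
    # internals of word_counts() leaves it untouched.
    assert word_counts("a b a") == {"a": 2, "b": 1}
```
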

Related

How does TDD drive the design?

I understand that TDD has many advantages (some of them are below). However, I am not sure how it drives the design:
Serves as documentation
Writing tests before the actual code helps maximize test coverage
Helps determine input value boundaries
Usually when we start implementing a new piece of functionality, we have a rough idea of the design, and we start with a TDD implementation of a class which is used by other classes as per that design. This understanding of mine seems to be in conflict with the statement "TDD drives the design".
Please help me understand this by an example.
Most people think that Test-Driven Development is a tool to write code with fewer bugs. But in reality, that is a by-product of TDD. TDD is more a tool for code design.
Overall, TDD helps quality code development in the following ways:
It makes you think about your code design and requirements at every stage, thereby ensuring that you are actually implementing what is required.
You are forced to write testable code, thereby ensuring that your code has loose coupling and high cohesion.
If your code is getting difficult to test, it usually signifies that there is some issue with your design (your code is too coupled or not isolated enough).
With that said, I tend to disagree with people who think that if you follow TDD blindly you'd always end up with good code design (because that depends more on you and your knowledge of software design), but I believe there is a good chance you would.
TDD doesn't drive design, but it's an excellent feedback mechanism, which enables you to get rapid feedback on your design ideas.
It also has the beneficial side-effect that it leaves behind a nice regression test suite.
The most basic idea behind this is that when you want to test something, you want to keep the tests simple and focused. This in turn forces you to start with the smallest bits of your problem, driving you toward the Single Responsibility Principle.
Another thing, for larger problems, is that you are forced (because you want your tests to be focused) to use mocks and stubs. This drives your design toward the Dependency Inversion Principle, which in turn makes your code loosely coupled.
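A small Python sketch of that pressure, with an invented `ReportService`: because the test wants a focused unit, the mailer dependency ends up injected rather than constructed internally, which is the Dependency Inversion Principle falling out of testability. `unittest.mock.Mock` stands in for the real mailer.

```python
from unittest.mock import Mock

class ReportService:
    # The mailer is injected rather than created inside the class,
    # precisely because a focused test wants to substitute a stub.
    def __init__(self, mailer):
        self.mailer = mailer

    def send_report(self, user, body):
        self.mailer.send(to=user, subject="Report", body=body)

def test_send_report_uses_mailer():
    mailer = Mock()  # stub: no real email is sent
    ReportService(mailer).send_report("bob@example.com", "hi")
    mailer.send.assert_called_once_with(
        to="bob@example.com", subject="Report", body="hi")
```
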
TDD in general
The goal of Test-driven Development (TDD) is to create software components that conform precisely to a set of functional specifications. The task of the tests is to validate conformity and to identify deviations from these specifications.
Design Guidance
By writing tests first, you're forced to make up your mind about what your software component should do. Thinking about meaningful test cases first will guide your design, as you're forced to find answers to a set of questions: How should it process inputs? With which components does it interface? What are the outputs I'm expecting? ...
TDD does not replace SW design. It sharpens your design!
In essence, TDD will help you to sharpen your design - but one thing is crucial: you need to find reasonable tests. And trying to replace functional specifications - call it a requirements document - with tests only is rather dangerous. Why?
Unit tests as used by TDD can only guarantee that your components exhibit the expected behavior for the given test data, not for all possible data! As Dijkstra said in the 1960s: "Testing shows the presence, not the absence of bugs".
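Dijkstra's point in miniature, with a deliberately buggy, invented leap-year check: the suite is green, yet the function is wrong for inputs it never sees.

```python
# A green test only vouches for the inputs it actually exercises.
def is_leap(year):
    return year % 4 == 0      # incomplete rule: misses the 100/400 cases

def test_is_leap():
    assert is_leap(2024)      # passes
    assert not is_leap(2023)  # passes
    # is_leap(1900) returns True, but 1900 was not a leap year;
    # the suite never asks, so the bug ships.
```
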
> Usually when we start implementing a new piece of functionality, we have a rough idea of the design.
That's the core of your question. If you only have a rough idea of the design, you had better spend more time at the drawing board, asking yourself questions like: What are the individual tasks my software should carry out? How can I split the general task into subtasks? How can I map these subtasks to components? Which data needs to be passed among them?
At that time, you might consider doing TDD. But without thinking about a design or software architecture first, you'll end up with a "spaghetti system" that is hard to understand and hard to maintain later on.
Great observation by Mark Seemann. The principle that pushes me to value TDD is fast feedback (and so fast learning). You can't substitute just doing TDD for good design techniques - learn good design principles and use TDD to provide fast feedback on your work.
I find that when using TDD, most of the deeper thinking about my design happens during the refactor step; the tests then provide a space for that step and a safety net for change, but they don't automatically create good design.
TDD allows me to hack some working code and use the learning I get while doing that to iterate towards a better design. Of course this can be done without TDD too, but the test helps by providing that safety net.

Does TDD preclude designing first?

I've been reading about TDD lately, and it is advocated because it supposedly results in code that is more testable and less coupled (and a bunch of other reasons).
I haven't been able to find much in the way of practical examples, except for a Roman numeral conversion and a number-to-English converter.
In these two examples I observed the typical red-green-refactor cycles and the application of the rules of TDD. However, this seemed like a big waste of time when normally I would observe a pattern, implement it in code, and then write tests for it afterwards. Or possibly write a stub for the code, write the unit tests, and then write the implementation - which might arguably be TDD - but not this continuous case-by-case refactoring.
TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture. My opinion so far is that the benefits of TDD can be achieved by a proper architectural design, although admittedly not everyone can do this reasonably well.
So here I have two questions:
Am I right in understanding that using TDD pretty much doesn't allow you to design first (see the rules of TDD)?
Is there anything that TDD gives you that you can't get from doing proper design before you start coding?
Well, I was in your shoes some time ago and had the same questions. Since then I have done quite some reading about TDD and decided to experiment with it a little.
I can summarize my experience about TDD in these points:
TDD is unit testing done right, ATDD/BDD is TDD done right.
Whether you design beforehand or not is totally up to you. Just make sure you don't do BDUF. Believe me, you will end up changing most of it midway, because you can never fully understand the requirements until your hands get dirty.
OTOH, you can do enough design to get you started. Class diagrams, sequence diagrams, domain models, actors and collaborators are perfectly fine as long as you don't get hung up in the design phase trying to figure everything out.
Some people don't do any design at all. They just let the code talk and concentrate on refactoring.
IMHO, balance your approach. Do some design until you get the hang of it, then start testing. When you reach a dead end, go back to the whiteboard.
One more thing: some things can't be solved by TDD, like figuring out an algorithm. This is a very interesting post that shows that some things just need to be designed first.
Unit testing is hard when you already have the code. TDD forces you to think from your API user's perspective. This way you can decide early on whether the public interface of your API is usable or not. If you decide to do unit testing after implementing everything, you will find it tedious, and most probably it will cover only some cases; I know some people who will write only passing test cases just to get the feature done. I mean, who wants to break his own code after all that work?
TDD breaks this mentality. Tests are first-class citizens. You aren't allowed to skip tests. You aren't allowed to postpone some tests until the next release because "we don't have enough time".
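A sketch of that test-first mindset with a hypothetical `parse_price`: the test is written as the call you wish existed, fixing the interface from the caller's perspective before implementation pressure can distort it.

```python
# Writing the test first means writing the call you wish existed.
# parse_price is an invented name; the test pins down its interface
# before any code exists.
def test_parse_price_reads_like_the_caller_wants():
    assert parse_price("$1,234.50") == 1234.50

# Only now is the implementation written, shaped by the call site above:
def parse_price(text):
    return float(text.replace("$", "").replace(",", ""))
```
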
Finally, to answer your question whether there is anything TDD gives you that you can't get from doing proper design before you start coding: I would say commitment.
As long as you're doing TDD, you are committed to applying good OO principles, so that your code is testable.
To answer your questions:
"Test Driven Development" (TDD) is often referred to as "Test Driven Design", in that this practice will result in a good design of the code. When you have written a failing unit test, you are forced into a test driven design approach, so that you can implement just what is needed to make the test pass i.e. you have to consider the design of the code you are writing to make the test pass.
When using a TDD approach a developer will implement the minimum amount of code required to pass the test. Doing proper design beforehand usually results in waste if the requirements change once the project has started.
You say "TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture" - If you are following an Agile approach to your software development, then you do still need to do some up front architectural investigation (e.g. if you were using the Scrum methodology you would create a development "Spike" at the start of a project) to ascertain what the minimum amount of architecture needed to start the project. At this stage you make decisions based on what you know now e.g. if you had to work with a small dataset you'd choose to use a regular DB, if you have a huge DB you might to choose to use a NoSQL big data DB etc.
However, once you have a general idea of the architecture, you should let the design evolve as the project progresses leaving further architectural decisions as late in the process as possible; Invariably as a project progresses the architecture and requirements will change.
Furthermore, this rather popular post on SO will give you even more reasons why TDD is definitely worth the effort.

TDD and UML together

I'm new to the TDD approach, so I'm wondering if anyone experienced with this could enlighten me a little. I would like to get some leads on how to use UML and the TDD methodology together.
I've been used to: design with UML --> generate skeleton classes (and then keep them synchronized) --> implement, and finally test. I must admit that the testing part was the worst one, so I started to look for something else - TDD. I have some general knowledge of what it is about, but before I proceed further I am interested in knowing how it goes together with software design, especially UML.
So when I first design/create a test, how can UML fit in? Would it be possible to design classes first, from them create skeleton classes, and from them generate unit tests which would be "filled in" before the actual implementation of the UML-pregenerated classes? Would this approach break the whole of TDD? Or is there any other way that would keep UML and TDD together?
The TDD cycle is test, code, refactor, (repeat) and then ship. As the TDD name implies, the development process is driven by testing, specifically that means writing tests first before developing or writing code.
The first paragraph is purely definitional ... from how Kent Beck defines TDD ... or how Wikipedians generally understand TDD ... I thought it was necessary to belabor the point with this definition, because I am not certain whether everyone is really discussing the same TDD, or whether others really understand the implications of the [most important part of the] definition of TDD: the implications of writing tests first. In other words, I think the answers to this question should delve a bit deeper into TDD, rather than explaining a bias for/against UML. The lengthy part of my answer relates to my opinion on using UML to support TDD ... UML is just a modelling language, certainly not required to do TDD; it could get in the way if applied inappropriately ... but UML can help with understanding requirements in order to write tests, with modeling to help refactor if needed, and with collecting the artifacts of the process to speed the documentation of shipped code. I would welcome any comments, criticisms or suggestions, but please don't vote my answer up or down because you agree or don't agree with the first paragraph ... the TDD acronym is Test-Driven Development, which means test-first development.
Writing a test first implies that the developer understands the specifications and requirements FIRST ... obviously, any test written should fail until the code gets written, but in TDD the test must be written first -- you can't do TDD without being focused on understanding a requirements specification before you write tests, before you write code. There are situations where the requirements do not exist at all; requirements elicitation involves a bit of hacking on a pre-pre-alpha version to "throw mud at the wall and see what sticks" ... that sort of thing should not be confused with development, certainly not test-driven development; it's basically just one form of requirements elicitation for a poorly understood application domain.
UML diagrams are one form of requirements input to TDD. Not the only one, but probably better than written specifications if people who are knowledgeable in creating UML diagrams are available. It is often better to work with visual diagrams for better communication when exploring the problem domain [with users/clients/other systems providers] during pre-implementation requirements modeling sessions ... where simulating performance is necessary for really understanding requirements (e.g. CANbus network interactions), it is often ideal if we can work with a specification language or CASE tool like Rhapsody or Simulink RT Workshop that can be executable, modular and complete ... UML is not necessarily part of TDD, but it is part of an approach to design synthesis that involves expending more effort understanding what is required first, before writing any code [and then wasting time trying to market that code to someone who cares]; generally, more focus on understanding the system's requirements lays the groundwork for more efficient, productive agile TDD iterations.
Refactoring is about improving design -- cleaning up working, tested code to make it simpler and easier to maintain. You want to tighten it up as much as possible to remove obfuscated tangles where bugs might be hiding or could spawn in future releases -- you don't refactor because a customer requires it; you refactor because it's cheaper to ship clean code than to continue to pay the cost of supporting/maintaining a complicated mess. Generally, most of TDD is code-focused; but you could employ UML for looking at a larger scope or taking a step back to look at the problem, e.g. creating a class diagram to help identify [missing] tests or possible refactorings. This is not something you'd need to mandate or want to do on a general basis, but where appropriate.
The last step, 'ship', is a serious step ... 'ship' is not shorthand for "toss it over the wall and forget it, because good code doesn't need support" or "abandon it and hope that there are no more iterations". From a financial or business perspective, shipping is the most important step of TDD, because it is where you get paid. Shipping does involve "shifting gears", because it includes systems integration, preparation for support and maintenance, getting ready for the next round of development, etc. The primary use of UML diagrams here will be to communicate [in abstract terms] how the code does what it does ... UML is useful because, hopefully, the diagrams are an artifact of the requirements and development processes; it's not necessary to start from scratch when the code ships ... as a communication tool, UML would be appropriate for reducing integration errors in multi-module systems, larger projects that might involve modules written in different languages, or networks of embedded systems where different companies must collaborate on safety-critical systems but need the abstraction to be stingy with or protective of their "proprietary knowledge".
Just as you should avoid using a big hammer in situations where a tiny screwdriver is appropriate, you aren't going to get anywhere by asking all developers to standardize on Emacs as their editor. Sometimes the view is not worth the climb - you don't want to always haul out the UML banner or become known as the guy who was always pushing UML ... particularly not in situations where there is no substitute for writing tests or reading code. If you stick to appropriate situations, you should not be afraid to use UML as a communication language wherever the language helps you.
> So when I first design/create test, how can UML fit in? Would it be possible to design classes first, from them create skeleton classes, from them generate unit tests which would be "filled" before actual implementation of UML pregenerated classes, would this approach break whole TDD? Or is there any other way that would keep UML and TDD together?
If you create an entire skeleton class - by which I assume you mean a class with all methods defined but empty - before writing your first test, then I would say you are not doing TDD, and lose the benefits of TDD. As we do TDD - Test-Driven Design - our tests incrementally lead us to the next elements - methods and classes - that our program needs. If you have predetermined, pre-specified in UML, what your classes and methods are, a great deal of your design is already done, and either your subsequent development is constrained to follow it, or you have wasted effort that subsequent work will undo.
There may be ways to use both UML and TDD together for design, but as you have described it, you are doing the design within UML, before TDD ever gets a chance. That won't give you the full benefit of TDD.
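A toy illustration of tests incrementally leading the design, using an invented `Stack`: rather than pre-generating every method from a diagram, each method appears only when a failing test demands it.

```python
# Each method below was added only when a failing test asked for it,
# instead of being pre-specified in a skeleton.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):        # written to satisfy the first test
        self._items.append(item)

    def pop(self):               # added later, when a new test failed
        return self._items.pop() # with AttributeError: no 'pop'

def test_push_then_pop():
    s = Stack()
    s.push(42)
    assert s.pop() == 42
```
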
> Or is there any other way that would keep UML and TDD together?
UML and TDD fit wonderfully together:
1. Make the initial design in UML - it doesn't have to be complete, just consistent and self-contained
2. Create the empty tests. This step could also be automated
3. All tests will fail at first, as required by TDD (because the code generated from the UML does not contain any implementation)
4. Start writing tests for each class
   - Start with classes that do not have a great deal of associations if you are confident in your software architecture and UML skills (no, you're not doing waterfall, but sometimes you just know what you're doing - you already know the application domain, or you used expert knowledge at step 1)
   - Start with classes that have a lot of associations ("central classes") if you are NOT confident in your understanding of the application domain - this will make it easier to eliminate bad design decisions, because you will notice them as early as possible
5. ... The tests are still failing
6. In parallel to each unit being tested (step 4), write the implementation inside the empty method bodies. Do NOT modify any class, interface or method names or parameter signatures. You are only allowed to add private helper methods, nothing more
7. If at step 6 (which runs in tandem with step 4) you realize you need to make changes to the design:
   - Go to step 1 and refine the UML, then generate the code again (good UML tools will not overwrite your implementation). Attention: avoid introducing new classes. You want to finish step 13 within a few weeks
   - Re-run the tests and fix the failing ones that were previously OK
   - Continue with what you left off at step 6
8. Go to step 6 if not all class tests pass
9. Proceed to component, package and subsystem tests
10. Add deployment to the UML and deploy to the integration environment (http://en.wikipedia.org/wiki/Development_environment_%28software_development_process%29)
11. Proceed to integration tests
12. Go through the test/QA stage
13. Go through User Acceptance Testing
14. Reiterate the previous steps as required (according to your iterative development process)

... months pass

15. Deploy version 1.0.0 to production

Do not try to arrive at too many design decisions at step 1 or in later reiterations of step 1 (refining the design). You want to finish step 13 in the first iteration after a few weeks.
While some people think UML is a design methodology, it is first and foremost a communication tool. Hence the name: Unified Modeling Language. The idea is to have a common vocabulary (of shapes) that you can put in a book and everybody will understand.
TDD on the other hand is a design methodology, the way to construct the system starting from its interface and its consumer, and only then adding the implementation.
Once your design has emerged as a result of applying TDD, you can communicate that design using UML. If you don't need to communicate (like if you're the only developer), you arguably don't need UML.
Some people think of domain analysis (identifying key Nouns, Verbs and Adjectives and building a basic ontological model) as being a part of UML methodology, reflected in the use case & ER diagrams... This may be useful to do before you jump into TDD red/green/refactor cycle. Then again, strictly speaking this is DDD (Domain Driven Design), not UML per se.
If you're designing classes using UML, you're designing what you think the code will look like. But what you create using TDD is the reality.
Leave UML until the end, to document the results of the code you created using TDD.
UML is the design part, of course.
I'd only use it on the system I was ending up with. Rendering test classes in UML seems ludicrous to me.
I'd use it to get a general sense of what I was creating. Then I'd start with TDD to implement it.
As far as keeping it in sync, I'd get a UML tool capable of import and export. I'd worry more about the code and less about the UML. It's useful for capturing your thoughts and documenting later, but not much else. I'd always prefer working, tested code over diagrams any day.

Test Driven Development is the way to go. But how should it be done?

A number of developers are designing their applications using Test Driven Development (TDD), but I would like to know at which stage I should incorporate TDD into my projects. Should I first design my classes, do some unit tests and determine what methods are needed, or design methods first and then do some unit tests after?
What is the best way to go about this?
TDD is a coding and design-in-the-small technique. It is not a big-picture envisioning technique. If you are starting to write an application, you want to do some storyboards, or wireframes, or even some simple sketches. You should have an idea about the larger-scale design, i.e. the classes and relationships in your system. Before you get to the point where you'd start to do interaction design (e.g. methods and arguments), you start doing TDD.
The wireframes you have done will give you an idea of how the system should appear. The large scale design will give you an idea of what classes to create. However, none of these models should be considered correct or permanent. As you write tests, you'll find better design ideas that will change your high-level design, and even your wireframe design.
The point of it being test driven is that the tests drive the design as well as the implementation.
In some cases that's not appropriate - it may be blatantly obvious what the design should look like, especially if it's implementing an existing interface (or some similarly restricted choice) - but when you've got a large "design space", TDD encourages you to write tests demonstrating how you want to be able to use the class, rather than starting off with how you think you want to implement it.
Generally if something is easy to test, it will be easy to use.
The mantra for test driven design:
create the test
compile the test (and verify that it fails to compile)
write the interface
compile the test (it should compile now)
run the test (it should fail now)
write the code
compile/run the test (it should do both now)
Repeat for each item.
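The mantra traced in Python, where "compile the test" roughly corresponds to the test failing with a `NameError` until the interface exists; `fizz` is an invented example.

```python
# 1. Create the test first:
def test_fizz():
    assert fizz(3) == "fizz"
    assert fizz(4) == "4"

# 2-3. Calling test_fizz() now raises NameError, so write the interface.
# 4-5. With a bare `def fizz(n): pass` it raises AssertionError instead.
# 6-7. Write the code; the test finally passes:
def fizz(n):
    return "fizz" if n % 3 == 0 else str(n)
```
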
I'm very skeptical about writing unit tests before implementation. Maybe it would be efficient if you had a good idea about the exact design of the classes and methods required, but generally at the start of a new project you don't.
My preferred approach is to acknowledge that the start of a new project can be a bit of organised chaos. One part of the design process is writing code to validate certain ideas or to find the best way. A lot of the time the code hits a dead end and gets thrown away, and any unit tests written for it would have been a waste of time.
Don't get me wrong, I'm all for well-designed software, and at a certain point it's essential that the organised chaos morphs into a well-understood design. At that point the class structure and UML diagrams start to solidify. This is the point where TDD becomes important to me. Every class should be able to be tested in isolation. If the design is good, with the right degree of decoupling, that is easy to achieve, usually with a sprinkling of mock objects around the place. If the design is bad, testing becomes a real nightmare and consequently gets ignored.
At the end of the day it's high-quality, productive code that you're after. Unit tests are one of the best tools to help achieve that, but they shouldn't take on a life of their own and become more important than the code they are testing, which is the feeling you sometimes get from the TDD mantra.
Start small
a) The next time you add a method to a utility class, write the tests first for the method. Methods that do string handling are a good place to start, as they tend to have a lot of bugs but don't need a complex dataset to test them.
b) Then once you get confident with writing unit tests, move on to the classes that don't touch the UI and don't talk directly or indirectly to the database.
c) You then have to decide if you will design your application to make unit testing easier. E.g.:
it is hard to test UI code, so look at MVP etc.
it is hard to test code that accesses the database, so separate your logic into methods that don't need to access the database to run.
a) and b) have a big payback without having to change how you design the overall system. You have to consider whether c) is worth the cost... (I think it is, but a lot of people don't, and often you can't control the overall system design.)
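One way c) can look in practice, with invented names: the business rule lives in a pure function that takes plain data, so the test never needs a database.

```python
# Pure logic: no DB access, trivially testable.
def overdue_ids(invoices, today):
    """Return ids of invoices whose due date is before today."""
    return [inv["id"] for inv in invoices if inv["due"] < today]

# A thin (untested here) shell would fetch rows from the database and
# call overdue_ids(); the test itself needs only plain data:
def test_overdue_ids():
    invoices = [{"id": 1, "due": 5}, {"id": 2, "due": 20}]
    assert overdue_ids(invoices, today=10) == [1]
```
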
Ideally you will start to apply TDD when you begin coding. Think about your design first; you can draw diagrams on paper, no need to use a CASE tool. Just envision the main classes and their interactions, pick one, write a first simple test, write the code to make it pass, write another test, make it pass, refactor...
If your process mandates documenting and reviewing the design before the coding stage, then you cannot follow TDD and let your design emerge. You can however make your design first, and then do the implementation test-first. You'll get unit-tests as if you applied TDD, but still applying your process.
Keep your design as high-level as possible, so that the low-level details of the design can emerge.
Please look at
http://en.wikipedia.org/wiki/Test-driven_development
which describes the high-level tasks.
TDD ensures that developers give equal importance to test cases and business logic. I followed it in my project and I could see the advantages: since I started by writing my test cases first, I ensured that my test cases handled all the possible coverage of the business logic, thereby catching/reducing the number of bugs. Sometimes it's not practical, since it requires enough time to write these test cases.
The bottom line is that you need to unit test your code really well. So either you start with the implementation and then ensure that there are enough test cases for all the scenarios (most people don't, hence this practice has emerged), or you write the test methods, do a dry run, and then write your implementation. Your implementation is considered complete only when all your test cases pass successfully.
You should listen to Stack Overflow podcast 41, where they talk about TDD. It's hard to say that everybody is using TDD, and it would take a lot of time and resources to reach a high code-coverage percentage. Unit testing could effectively double the development time if you write tests for all of your code.
One of the points from the podcast is that your time and resources could be better used doing other tasks such as usability and new features.

What are some common misunderstandings about TDD?

Reading over the responses to the question Disadvantages of Test Driven Development?, I got the impression there is a lot of misunderstanding about what TDD is and how it should be conducted. It may prove useful to address these issues here.
I feel the accepted answer was one of the weakest (Disadvantages of Test Driven Development?), and the most upvoted answer smells of someone who might be writing over-specified tests.
> Big time investment: for the simple case you lose about 20% of the actual implementation, but for complicated cases you lose much more.
TDD is an investment. I've found that once I was fully into TDD, the time I lost was very, very little, and what time I did lose was more than made up for when it came to maintenance time.
> For complex cases your test cases are harder to calculate; I'd suggest in cases like that to try and use automatic reference code that will run in parallel in the debug version / test run, instead of the unit test of the simplest cases.
If your tests are becoming very complex, it might be time to review your design. TDD should lead you down the path of smaller, less complex units of code working together.
> Sometimes the design is not clear at the start and evolves as you go along - this will force you to redo your tests, which will generate a big time loss. I would suggest postponing unit tests in this case until you have some grasp of the design in mind.
This is the worst point of them all! TDD should really be "Test-Driven Design". TDD is about design, not testing. To fully realise the benefits of TDD, you have to drive your design from your tests. So you should be redoing your production code to make your tests pass, not the other way round, as this point suggests.
Now the currently most upmodded: Disadvantages of Test Driven Development?
> When you get to the point where you have a large number of tests, changing the system might require re-writing some or all of your tests, depending on which ones got invalidated by the changes. This could turn a relatively quick modification into a very time-consuming one.
Like the accepted answer's first point, this seems like over-specification in the tests and a general lack of understanding of the TDD process. When making changes, start from your test. Change the test for what the new code should do, and make the change. If that change breaks other tests, then your tests are doing what they're supposed to do: failing. Unit tests, for me, are designed to fail, hence why the RED stage comes first and should never be skipped.
IMHO the biggest misconception about TDD is that time spent writing and refactoring tests is time lost. The thinking goes like "yeah, a test suite is nice, but the feature would be complete much faster if we just coded it".
When done properly, the time spent writing and maintaining tests is saved multiple times over the life of the project in time not spent debugging and fixing regressions. Since the testing cost is up-front and the payoff is over time, it is easy to overlook.
Other big misconceptions include ignoring the impact of TDD on the design process, and not realizing that "painful tests" is a serious code smell that needs fixing quickly.
I see a lot of people misunderstanding which tests are actually useful for TDD. People write big acceptance tests instead of small unit tests, spend far too much time maintaining their tests, and then conclude that TDD doesn't work. I think the BDD people have a point in avoiding the word "test" entirely.
The other extreme is that people stop doing acceptance testing and think that because they do unit testing their code is tested. This is again a misunderstanding of the function of a unit test. You still need acceptance tests of some sort.
The misconception that I often see is that TDD ensures good results.
Oftentimes tests are written from flawed requirements, and therefore the developers produce a product that does not do what the user is expecting. Key to TDD, in my opinion, is working with the users to define requirements while helping manage their expectations.
These are the issues that in my opinion are quite controversial and hence prone to misunderstanding:
In my experience the biggest advantage is producing far better code, at the cost of a lot of time spent writing tests. So it's really worthwhile for projects that require high quality, but on other, less quality-centric sites, the extra time will not be worth the effort.
People seem to think that only a major subset of the features must be tested, but that is actually wrong, IMHO. You need to test everything in order for your tests to remain valid after refactoring.
The big drawback of TDD is the false sense of security given by incomplete tests: I've seen sites go down because people assumed that Unit Testing was enough to trigger a deployment.
There is no need for mocking frameworks to do TDD. They are just a tool for testing some cases in an easier way. The best unit tests, though, are fired high in the stack and should be agnostic about the layers in the code. Testing one layer at a time is meaningless in this context.
Just chucking another answer in the pot.
One of the most common misunderstandings is that your code is fixed, i.e. "I have this code, now how on earth will I test it?" If it's hard to write a test, we should ask the question: how can I change this code to make it easier to test?
Why..?
Well, the sort of code that's easy to test is:
Modular - each method does one thing.
Parameterised - each method accepts everything it needs and outputs everything it should.
Well Specified - each method does exactly what it should, no more, no less.
If we write code like this, testing is a doddle. The interesting thing is that code that is easy to test is, coincidentally, better code.
Better as in easier to read, easier to test, easier to understand, easier to debug. This is why TDD is often described as a design exercise.
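The three properties in miniature, with an invented `apply_discount`: one job (modular), everything in through parameters and the result out (parameterised), and behaviour exactly as the name promises, no more and no less (well specified).

```python
def apply_discount(price, rate):
    """Return the price after applying a fractional discount rate."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

def test_apply_discount():
    # Testing is a doddle: plain inputs in, plain value out.
    assert apply_discount(100.0, 0.25) == 75.0
```
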

Resources