How do I find a balance for negative tests in TDD?

Let's imagine we need to develop an app that simply shows you the local outside temperature. So, when you TDD you would write a test case:
describe 'App'
  it 'shows the local temperature'
That's it. A week later you realise people would pay to see the full local weather forecast, so you continue TDD:
describe 'App'
  it 'shows the local temperature'
  describe 'when they pay me 5$'
    it 'shows full weather forecast'
However, some people recommend adding a negative test to avoid false positives:
describe 'App'
  it 'shows the local temperature'
  it 'does not show full weather forecast'
  describe 'when they pay me 5$'
    it 'shows full weather forecast'
# or even
describe 'App'
  it 'shows the local temperature'
  describe 'when scrooge'
    it 'does not show full weather forecast'
  describe 'when they pay me 5$'
    it 'shows full weather forecast'
While this will prevent accidentally allowing non-paying users to see the full weather forecast, this is not really scalable (imagine we have a lot of access layers, for example).
The question is: is there any well-known or established practice that balances negative tests in TDD?

The short answer is no. To give you a bit of context, it all comes down to the importance of the requirements around negative tests. In the example you've given, if it is important to prevent accidentally allowing non-paying users to see the full weather forecast, then you would consider adding (TDD) negative tests.
However, generally with TDD you write a positive test first and build your software for the "happy path" scenarios. This way you have working software first. TDD is all about designing in small steps.
Then, if the requirements around negative scenarios are critical, you would consider adding them later. For example, once the software is in the testing phase you may identify that certain negative tests are critical, and you can start adding them as required. Keep the mindset that you only add them when they are genuinely needed.

However, some people recommend adding a negative test to avoid false positives:
It doesn't avoid false positives. A false positive would be it 'shows the local temperature' passing while the application doesn't really show the temperature.
As Spock pointed out, systematically adding negative tests (tests that check that something doesn't happen) is problematic because there is no end to what can be tested that way.
I usually write regression tests for bugs that did happen in reality but rarely take chances at checking stuff that could hypothetically go wrong in a number of fuzzy ways.

Another way of thinking about this is to rephrase the feature as "The app shows the full weather forecast if and only if the user has paid."
Then it's a matter of taste whether you test this in one method or two. Something like this, as pseudocode:
userSeesFullWeatherOnMainPageIfAndOnlyIfHeHasPaid:
  freeUser = aNewUser()
  freeUser.getsMainPage()
  freeUser.shallNotSeeFullWeather()

  paidUser = aNewUser()
  paidUser.paysForFullWeather()
  paidUser.getsMainPage()
  paidUser.shallSeeFullWeather()
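As a rough translation of that pseudocode into a runnable RSpec example: this is only a sketch, and the App class with its paid:/shows_full_forecast? interface is an assumption made up for illustration, not an API from the question.
# Minimal, self-contained sketch of the "if and only if" phrasing.
# App and its interface are hypothetical; run with the rspec gem installed.
require 'rspec/autorun'

class App
  def initialize(paid:)
    @paid = paid
  end

  # The full forecast is visible exactly when the user has paid.
  def shows_full_forecast?
    @paid
  end
end

RSpec.describe App do
  it 'shows the full forecast if and only if the user has paid' do
    expect(App.new(paid: true).shows_full_forecast?).to be true
    expect(App.new(paid: false).shows_full_forecast?).to be false
  end
end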

Related

TDD FIRST principle

I don't understand which TDD FIRST principle isn't being adhered to in the following code.
These are my notes about the FIRST principle:
Fast: run (subset of) tests quickly (since you'll be running them all the time)
Independent: no tests depend on others, so can run any subset in any order
Repeatable: run N times, get same result (to help isolate bugs and enable automation)
Self-checking: test can automatically detect if passed (no human checking of output)
Timely: written about the same time as code under test (with TDD, written first!)
The quiz question:
Sally wants her website to have a special layout on the first Tuesday of every month. She has the following controller and test code:
# HomeController
def index
  if Time.now.tuesday?
    render 'special_index'
  else
    render 'index'
  end
end
# HomeControllerSpec
it "should render special template on Tuesdays" do
  get 'index'
  if Time.now.tuesday?
    response.should render_template('special_index')
  else
    response.should render_template('index')
  end
end
What FIRST principle is not being followed?
Fast
Independent
Repeatable
Self-checking
Timely
I'm not sure which FIRST principle is not being adhered to:
Fast: The code seems to be fast because there is nothing complex about its tests.
Independent: The test doesn't depend on other tests.
Repeatable: The test will get the same result every time. 'special_index' if it's Tuesday and 'index' if it's not Tuesday.
Self-checking: The test can automatically detect if it's passed.
Timely: Both the code and the test code are presented here at the same time.
I chose Timely on the quiz because the test code was presented after the controller code. But I got the question wrong, and in retrospect, this wasn't a good choice. I'm not sure which FIRST principle isn't being followed here.
It's not Repeatable, as not every day is Tuesday :) If you run this test on Monday you will get one result; if you run it on Tuesday, a different one.
F.I.R.S.T., F.I.I.R.S.T. and FASRCS
Yes, part of the confusion stems from the fact that the F.I.R.S.T. principle is not complete or precise enough concerning the "I". In courses I attended, the principle was called F.I.I.R.S.T.
The second "I" stands for "Isolated". The test above is independent from other tests, but is not isolated in a separate class or project.
[Updated]:
Isolation can mean:
A unit test isolates functionality out of a SUT (system under test). You can isolate functionality even out of a single function. This draws the line between unit tests (or their relatives, component tests) and integration tests, and of course system tests.
"Tests isolate failures. A developer should never have to reverse-engineer tests or the code being tested to know what went wrong. Each test class name and test method name with the text of the assertion should state exactly what is wrong and where." Ref.: History of FIRST principle
A unit test could be isolated from the SUT which it tests in a different developer artifact (class, package, development project) and/or delivery artifact (Dll, package, assembly).
Unit tests, testing the same SUT, especially their containing Asserts, should be isolated from each other in different test functions, but this is only a recommendation. Ideally each unit test contains only one assert.
Unit tests testing different SUTs should of course be isolated from each other, and from other kinds of tests, even further: in different classes or the other artifacts mentioned above.
Independence can mean:
Unit tests should not rely on each other (explicit independence), with the exception of special "setup" and "teardown" functions, though even this is subject to discussion.
In particular, unit tests should be order-independent (implicit independence). The result should not depend on which unit tests were executed before. While this sounds trivial, it isn't. There are tests which cannot avoid doing initializations and/or starting runtimes. Just one shared (e.g. class) variable and the SUT could react differently if it was started before. You make an outside call to the operating system? Some DLL gets loaded for the first time? You already have a potential dependency, at least at OS level: sometimes only minor, sometimes enough to keep an error from being discovered. It may be necessary to add cleanup code to reach optimal independence of tests.
Unit tests should be as independent as possible from the runtime environment and should not depend on a specific test environment or setting. This also partly belongs to "Repeatable".
No need to fill out twenty user dialogs first. No need to start the server. No need to make the database available. No need for another component. No need for a network. To accomplish that, test doubles are often used (stubs, mocks, fakes, dummies, spies, ...); a minimal sketch follows this list.
(Gerard Meszaros' classic work on xUnit patterns, which coined the name 'test double' and defines its different kinds)
(Test double zoo quickly explained)
(Follow Martin Fowler 2007, thinking about stubs, mocks, etc. Classic)
While a unit test is never totally independent from its SUT, ideally it should be as independent as possible from the current implementation and only rely on the public interface of the function or class tested (SUT).
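To make the test-double point above concrete, here is a minimal sketch; the WeatherReport class and the forecast_for interface are assumptions invented for illustration, not part of any answer above.
# Sketch: isolating the SUT from a network-backed dependency with an RSpec double.
require 'rspec/autorun'

class WeatherReport
  def initialize(forecast_service)
    @forecast_service = forecast_service
  end

  def headline(city)
    "#{city}: #{@forecast_service.forecast_for(city)}"
  end
end

RSpec.describe WeatherReport do
  it 'builds a headline without touching the real service' do
    service = double('forecast service', forecast_for: 'sunny, 21 °C')
    report  = WeatherReport.new(service)
    expect(report.headline('Oslo')).to eq('Oslo: sunny, 21 °C')
  end
end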
Conclusion: in these interpretations the word 'isolation' stresses more the physical location, which often implies logical independence to some extent (e.g. class-level isolation).
No claim of completeness is made concerning further nuances and meanings.
See also comments here.
But there are more properties of (good) unit tests: Roy Osherove identified some more attributes in his book "The Art of Unit Testing" (link to his book site) which I don't find exactly in the F.I.I.R.S.T. principle, and which are cited here in my own words (and acronym):
Full control of SUT: the unit test should have full control of the SUT. I see this as effectively identical to being independent from the test and runtime environment (e.g. using mocks, etc.). But because "independence" is so ambiguous, it makes sense to spend a separate letter on it.
Automated (related to repeatable and self-checking, but not identical). This one requires a test (runner) infrastructure.
Small, simple, or in his words "easy to implement" (again related, but not identical). Often related to "Fast".
Relevant: The test should still be relevant tomorrow. This is one of the most difficult requirements to achieve, and depending on the "school" there may be a need for temporary unit tests during TDD too. When a test only tests the contracts, this is achieved, but that may not be enough for high code-coverage requirements.
Consistent result: Effectively a result of "enough" independence. Consistency is what some people include in "Repeatable". There is an essential overlap, but they are not identical.
Self-explaining: In terms of naming, structure of the whole test, and specifically of the syntax of the assert as the key line, it should be clear what is tested, and what could be wrong if a test fails. Related to "Tests isolate failures", see above.
Given all these specific points, it should be clearer than before that it is anything but simple to write (good) unit tests.
Independent and Repeatable
The test is not independent of the date, and therefore not truly repeatable. Technically you get the same result every run only because the test repeats the same date check as the code under test.
The proper way to write a test for HomeController with respect to the FIRST principles is to control the time before the evaluation stage.
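For example, the date can be pinned inside each example so the spec exercises both branches deterministically. This is only a sketch: index_template below is a hypothetical stand-in for the controller action, since the real spec would run inside Rails.
# Sketch of making the Tuesday check repeatable by stubbing Time.now (rspec gem assumed).
require 'rspec/autorun'

# Hypothetical stand-in for HomeController#index: returns the template name it would render.
def index_template
  Time.now.tuesday? ? 'special_index' : 'index'
end

RSpec.describe 'HomeController#index (sketch)' do
  it 'renders special_index on Tuesdays' do
    allow(Time).to receive(:now).and_return(Time.new(2024, 1, 2)) # 2024-01-02 was a Tuesday
    expect(index_template).to eq('special_index')
  end

  it 'renders index on any other day' do
    allow(Time).to receive(:now).and_return(Time.new(2024, 1, 1)) # 2024-01-01 was a Monday
    expect(index_template).to eq('index')
  end
end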

Test Driven Development initial implementation

A common practice of TDD is that you make tiny steps. But one thing which is bugging me is something I've seen a few people do, whereby they just hardcode values/options and then refactor later to make it work properly. For example…
describe Calculator
  it should multiply
    assert Calculator.multiply(4, 2) == 8
Then you do the least possible to make it pass:
class Calculator
  def self.multiply(a, b)
    return 8
  end
end
And it does!
Why do people do this? Is it to ensure they're actually implementing the method in the right class or something? Because it just seems like a sure-fire way to introduce bugs and give false confidence if you forget something. Is it a good practice?
This practice is known as "Fake it 'til you make it." In other words, put fake implementations in until such time as it becomes simpler to put in a real implementation. You ask why we do this.
I do this for a number of reasons. One is simply to ensure that my test is being run. It's possible for things to be configured wrong, so that when I hit my magic "run tests" key I'm actually not running the tests I think I'm running. If I press the button and it's red, then put in the fake implementation and it's green, I know I'm really running my tests.
Another reason for this practice is to keep a quick red/green/refactor rhythm going. That is the heartbeat that drives TDD, and it's important that it have a quick cycle. Important so you feel the progress, important so you know where you're at. Some problems (not this one, obviously) can't be solved in a quick heartbeat, but we must advance on them in a heartbeat. Fake it 'til you make it is a way to ensure that timely progress. See also flow.
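A common follow-up to "fake it 'til you make it" is triangulation: add a second example that the hard-coded value cannot satisfy, which forces the real implementation. A minimal sketch, reusing the Calculator from the question (test framework details are assumptions):
# After the fake `return 8` passes, a second example forces the general implementation.
require 'rspec/autorun'

class Calculator
  def self.multiply(a, b)
    a * b # the hard-coded 8 can no longer satisfy both examples below
  end
end

RSpec.describe Calculator do
  it 'multiplies 4 by 2' do
    expect(Calculator.multiply(4, 2)).to eq(8)
  end

  it 'multiplies 3 by 5' do
    expect(Calculator.multiply(3, 5)).to eq(15)
  end
end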
There is a school of thought, which can be useful in training programmers to use TDD, that says you should not have any lines of source code that were not originally part of a unit test. By first coding the algorithm that passes the test into the test, you verify that your core logic works. Then, you refactor it out into something your production code can use, and write integration tests to define the interaction and thus the object structure containing this logic.
Also, religious TDD adherence would tell you that there should be no logic coded that a requirement, verified by an assertion in a unit test, does not specifically state. Case in point; at this time, the only test for multiplication in the system is asserting that the answer must be 8. So, at this time, the answer is ALWAYS 8, because the requirements tell you nothing different.
This seems very strict, and in the context of a simple case like this, nonsensical; to verify correct functionality in the general case, you would need an infinite number of unit tests, when you as an intelligent human being "know" how multiplication is supposed to work and could easily set up a test that generated and tested a multiplication table up to some limit that would make you confident it would work in all necessary cases. However, in more complex scenarios with more involved algorithms, this becomes a useful study in the benefits of YAGNI. If the requirement states that you need to be able to save record A to the DB, and the ability to save record B is omitted, then you must conclude "you ain't gonna need" the ability to save record B, until a requirement comes in that states this. If you implement the ability to save record B before you know you need to, then if it turns out you never need to then you have wasted time and effort building that into the system; you have code with no business purpose, that regardless can still "break" your system and thus requires maintenance.
Even in the simpler cases, you may end up coding more than you need if you code beyond requirements that you "know" are too light or specific. Let's say you were implementing some sort of parser for string codes. The requirements state that the string code "AA" = 1 and "AB" = 2, and that's the limit of the requirements. But you know the full library of codes in this system includes 20 others, so you include logic and tests that parse the full library. You go back to the client, expecting your payment for time and materials, and the client says "we didn't ask for that; we only ever use the two codes we specified in the tests, so we're not paying you for the extra work". And they would be exactly right; you've technically tried to bilk them by charging for code they didn't ask for and don't need.

When applying TDD, what heuristics do you use to select which test to write next?

The first part of the TDD cycle is selecting a test to fail. I'd like to start a community wiki about this selection process.
Sometimes selecting the test to start with is obvious, start with the low hanging fruit. For example when writing a parser, an easy test to start with is the one that handles no input:
def testEmptyInput():
    result = parser.parse("")
    assertNullResult(result)
Some tests are easy to pass requiring little implementation code, as in the above example.
Other tests require complex slabs of implementation code to pass, and I'm left with the feeling I haven't done the "easiest thing possible to get the test to pass". At that point I stop trying to pass this test and select a new test to try to pass, in the hope that it will reveal an easier implementation for the problematic one.
I'd like to explore some of the characteristics of these easy and challenging tests, and how they impact test-case selection and ordering.
How does test selection relate to top-down and bottom-up strategies? Can anyone recommend writings that address these strategies in relation to TDD?
I start by anchoring the most valuable behaviour of the code.
For instance, if it's a validator, I'll start by making sure it says that valid objects are valid. Now we can showcase the code, train users not to do stupid things, etc. - even if the validator never gets implemented any further. After that, I start adding the edge cases, with the most dangerous validation mistakes first.
If I start with a parser, rather than start with an empty string, I might start with something typical but simple that I want to parse and something I'd like to get out of that. For me unit tests are more like examples of how I'm going to want to use the code.
I also follow BDD's practice of naming tests with "should", so for your example I'd have shouldReturnNullIfTheInputIsEmpty(). This helps me identify the next most important thing the code should do.
This is also related to BDD's "outside-in". Here are a couple of blog posts I wrote which might help: Pixie Driven Development and Bug Driven Development. Bug Driven Development helps me to work out what the next bit of system-level functionality I need should be, which then helps me find the next unit test.
Hope this gives you a slightly different perspective, anyway - good luck!
To begin with, I'll pick a simple typical case and write a test for it.
While I'm writing the implementation I'll pay attention to corner cases that my code handles incorrectly. Then I'll write tests for those cases.
Otherwise I'll look for things that the function in question should do, but doesn't.

TDD - When is it okay to write a non-failing test? [closed]

From what I understand, in TDD you have to write a failing test first, then write the code to make it pass, then refactor. But what if your code already accounts for the situation you want to test?
For example, let's say I'm TDD'ing a sorting algorithm (this is just hypothetical). I might write unit tests for a couple of cases:
input = 1, 2, 3
output = 1, 2, 3
input = 4, 1, 3, 2
output = 1, 2, 3, 4
etc...
To make the tests pass, I wind up using a quick 'n dirty bubble-sort. Then I refactor and replace it with the more efficient merge-sort algorithm. Later, I realize that we need it to be a stable sort, so I write a test for that too. Of course, the test will never fail because merge-sort is a stable sorting algorithm! Regardless, I still need this test in case someone refactors it again to use a different, possibly unstable sorting algorithm.
Does this break the TDD mantra of always writing failing tests? I doubt anyone would recommend I waste the time to implement an unstable sorting algorithm just to test the test case, then reimplement the merge-sort. How often do you come across a similar situation and what do you do?
There are two reasons for writing failing tests first and then making them pass:
The first is to check whether the test is actually testing what you wrote it for. You first check that it fails, you change the code to make the test pass, then you check that it passes. It seems stupid, but I've had several occasions where I added a test for code that already worked, only to find out later that I had made a mistake in the test that made it always pass.
The second and most important reason is to prevent you from writing too many tests. Tests reflect your design, your design reflects your requirements, and requirements change. You don't want to have to rewrite lots of tests when this happens. A good rule of thumb is to have every test fail for only one reason, and to have only one test fail for that reason. TDD tries to enforce this by repeating the standard red-green-refactor cycle for every test, every feature and every change in your code-base.
But of course rules are made to be broken. If you keep in mind why these rules were made in the first place, you can be flexible with them. For example, when you find that you have a test that tests more than one thing, you can split it up. Effectively you have written two new tests that you haven't seen fail before. Breaking and then fixing your code to see your new tests fail is a good way to double-check things.
I doubt anyone would recommend I waste the time to implement an unstable sorting algorithm just to test the test case, then reimplement the merge-sort. How often do you come across a similar situation and what do you do?
Let me be the one recommend it then. :)
All this stuff is trade-offs between the time you spend on the one hand, and the risks you reduce or mitigate, as well as the understanding you gain, on the other hand.
Continuing the hypothetical example...
If "stableness" is an important property/feature, and you don't "test the test" by making it fail, you save the time of doing that work, but incur risk that the test is wrong and will always be green.
If on the other hand you do "test the test" by breaking the feature and watching it fail, you reduce the risk of the test.
And, the wildcard is, you might gain some important bit of knowledge. For example, while trying to code a 'bad' sort and get the test to fail, you might think more deeply about the comparison constraints on the type you're sorting, and discover that you were using "x==y" as the equivalence-class-predicate for sorting but in fact "!(x<y) && !(y<x)" is the better predicate for your system (e.g. you might uncover a bug or design flaw).
So I say err on the side of 'spend the extra time to make it fail, even if that means intentionally breaking the system just to get a red dot on the screen for a moment', because while each of these little "diversions" incurs some time cost, every once in a while one will save you a huge bundle (e.g. oops, a bug in the test means that I was never testing the most important property of my system, or oops, our whole design for inequality predicates is messed up). It's like playing the lottery, except the odds are in your favor in the long run; every week you spend $5 on tickets and usually you lose but once every three months you win a $1000 jackpot.
The one big advantage to making the test fail first is that it ensures that your test is really testing what you think. You can have subtle bugs in your test that cause it to not really test anything at all.
For example, I once saw in our C++ code base someone check in the test:
assertTrue(x = 1);
Clearly they didn't program so that the test failed first, since this doesn't test anything at all.
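The same trap exists in most languages. As a hedged Ruby sketch of the equivalent mistake: the assignment makes the expectation vacuously truthy, so the test can never fail no matter what the code does.
# A test that can never fail: `=` assigns instead of comparing (should be x == 1).
require 'rspec/autorun'

RSpec.describe 'a vacuous assertion' do
  it 'always passes, regardless of what x was supposed to be' do
    x = 0
    expect(x = 1).to be_truthy # assignment returns 1, which is truthy
  end
end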
Simple TDD rule: You write tests that might fail.
If software engineering has taught us anything, it's that you cannot predict test results, not even failure. It's in fact quite common for me to see "new feature requests" which already happen to work in existing software. This is common because many new features are straightforward extensions of existing business desires. The underlying, orthogonal software design will still work.
E.g., a new feature "List X must hold up to 10 items" instead of "up to 5 items" will require a new test case. The test will pass when the actual implementation of List X allows 2^32 items, but you don't know that for sure until you run the new test.
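As a sketch of that situation (the BoundedList class here is hypothetical, invented for illustration): the new requirement gets its own example, and it may well pass immediately because the existing implementation already allows far more than 10 items.
# New requirement test that may already pass against the existing implementation.
require 'rspec/autorun'

class BoundedList
  MAX_ITEMS = 2**32 # hypothetical: the implementation happens to allow far more than 10

  def initialize
    @items = []
  end

  def add(item)
    raise 'list is full' if @items.size >= MAX_ITEMS
    @items << item
  end

  def size
    @items.size
  end
end

RSpec.describe BoundedList do
  it 'holds up to 10 items' do # the new requirement; no code change needed for it to pass
    list = BoundedList.new
    10.times { |i| list.add(i) }
    expect(list.size).to eq(10)
  end
end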
Hard core TDDers would say you always need a failing test to verify that a positive test isn't a false positive, but I think in reality a lot of developers skip the failing test.
If you are writing a new piece of code, you write the test, then the code. This means the first time you always have a failing test (because it is executed against a dummy interface).
Then you may refactor several times, and in that case you may not need to write additional tests, because the ones you have may already be enough.
However, you may want to maintain some code with TDD methods; in this case, you first have to write tests as characterization tests (which by definition will never fail, because they are executed against working interfaces), then refactor.
There are reasons to write tests in TDD beyond just "test-first" development.
Suppose that your sort method has some other properties beyond the straight sorting action, eg: it validates that all of the inputs are integers. You don't initially rely on this and it's not in the spec, so there's no test.
Later, if you decide to exploit this additional behaviour, you need to write a test so that anyone else who comes along and refactors doesn't break this additional behaviour you now rely on.
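Using the asker's stability example, a test that pins down such additional behaviour might look like this sketch. The sort routine here is a stand-in that forces stability by tie-breaking on the original index (Ruby's built-in sort is not guaranteed to be stable); the real test would call your own sort.
# Stability test: records with equal keys must keep their original relative order.
require 'rspec/autorun'

# Hypothetical stand-in for the sort under test: stable sort of [key, value] pairs.
def stable_sort_by_key(pairs)
  pairs.each_with_index.sort_by { |(key, _), index| [key, index] }.map(&:first)
end

RSpec.describe 'stable sort' do
  it 'preserves the original order of records with equal keys' do
    input  = [[2, 'a'], [1, 'b'], [2, 'c'], [1, 'd']]
    output = stable_sort_by_key(input)
    expect(output).to eq([[1, 'b'], [1, 'd'], [2, 'a'], [2, 'c']])
  end
end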
But what if your code already accounts for the situation you want to test?
Does this break the TDD mantra of always writing failing tests?
Yes, because you've already broken the mantra of writing the test before the code. You could just delete the code and start over, or just accept the test working from the start.
Uh... I read the TDD cycle as
write the test first, which will fail because the code is just a stub
write the code so that the test passes
refactor as necessary
There's no obligation to keep writing tests that fail; the first one fails because there's no code to do anything. The point of the first test is to decide on the interface!
EDIT: There seems to be some misunderstanding of the "red-green-refactor" mantra. According to wikipedia's TDD article
In test-driven development, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented.
In other words, the must-fail test is for a new feature, not for additional coverage!
EDIT: unless you're talking about writing a regression test to reproduce a bug, of course!
The example you provided is IMO one of the proper times to write a test that passes first try. The purpose of proper tests is to document the expected behavior of the system. It's ok to write a test without changing the implementation to further clarify what the expected behavior is.
P.S.
As I understand it, here's the reason you want the test to fail before making it pass:
The reason you "write a test that you know will fail, but test it before making it pass" is that every once in a while, the original assumption that the test will surely fail is wrong. In those cases the test has now saved you from writing unnecessary code.
As others have said, the mantra of TDD is "no new code without a failing unit test". I have never heard any TDD practitioners say "no new tests without missing code". New tests are always welcome, even if they "happen" to "accidentally" pass. There's no need to change your code to break, then change it back to pass the test.
I have run into this situation many times. Whilst I recommend and try to use TDD, sometimes it breaks the flow too much to stop and write tests.
I have a two-step solution:
Once you have your working code and your non-failing test, deliberately insert a change into the code to cause the test to fail.
Cut that change out of your original code and put it in a comment - either in the code method or in the test method, so next time someone wants to be sure the test is still picking up failures they know what to do. This also serves as your proof of the fact that you have confirmed the test picks up failure. If it is cleanest, leave it in the code method. You might even want to take this so far as to use conditional compilation to enable test breakers.

TDD. When you can move on?

When doing TDD, how to tell "that's enough tests for this class / feature"?
I.e. when could you tell that you completed testing all edge cases?
With Test Driven Development, you'll write a test before you write the code it tests. Once you've written the code and the test passes, then it's time to write another test. If you follow TDD correctly, you've written enough tests once your code does all that is required.
As for edge cases, let's take an example such as validating a parameter in a method. Before you add the parameter to your code, you create tests which verify the code will handle each case correctly. Then you can add the parameter and associated logic, and ensure the tests pass. If you think up more edge cases, then more tests can be added.
By taking it one step at a time, you won't have to worry about edge cases when you've finished writing your code, because you'll have already written the tests for them all. Of course, there's always human error, and you may miss something... When that situation occurs, it's time to add another test and then fix the code.
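A sketch of what those parameter-validation edge cases could look like, written one at a time before the logic is added; the Temperature.format method and its contract are assumptions made up for this example.
# Edge-case tests for parameter validation (hypothetical API, rspec gem assumed).
require 'rspec/autorun'

module Temperature
  def self.format(celsius)
    raise ArgumentError, 'temperature must be numeric' unless celsius.is_a?(Numeric)
    "#{celsius.round(1)} °C"
  end
end

RSpec.describe Temperature do
  it 'formats a numeric value' do
    expect(Temperature.format(21.46)).to eq('21.5 °C')
  end

  it 'rejects nil' do
    expect { Temperature.format(nil) }.to raise_error(ArgumentError)
  end

  it 'rejects non-numeric input' do
    expect { Temperature.format('warm') }.to raise_error(ArgumentError)
  end
end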
Kent Beck's advice is to write tests until fear turns into boredom. That is, until you're no longer afraid that anything will break, assuming you start with an appropriate level of fear.
On some level, it's a gut feeling of "Am I confident that the tests will catch all the problems I can think of now?"
On another level, you've already got a set of user or system requirements that must be met, so you could stop there.
While I do use code coverage to tell me if I didn't follow my TDD process and to find code that can be removed, I would not count code coverage as a useful way to know when to stop. Your code coverage could be 100%, but if you forgot to include a requirement, well, then you're not really done, are you?
Perhaps a misconception about TDD is that you have to know everything up front to test. This is misguided because the tests that result from the TDD process are like a breadcrumb trail. You know what has been tested in the past, and that can guide you to an extent, but it won't tell you what to do next.
I think TDD could be thought of as an evolutionary process. That is, you start with your initial design and its set of tests. As your code gets battered in production, you add more tests, and code that makes those tests pass. Each time you add a test here and a test there, you're also doing TDD, and it doesn't cost all that much. You didn't know those cases existed when you wrote your first set of tests, but you have gained the knowledge now, and can check for those problems at the touch of a button. This is the great power of TDD, and one reason why I advocate for it so much.
Well, when you can't think of any more cases that might not work as intended.
Part of TDD is to keep a list of things you want to implement, and problems with your current implementation... so when that list runs out, you are essentially done....
And remember, you can always go back and add tests when you discover bugs or new issues with the implementation.
That's common sense; there's no perfect answer. TDD's goal is to remove fear: if you feel confident you've tested it well enough, move on...
Just don't forget that if you find a bug later on, write a test first to reproduce the bug, then correct it, so future changes won't break it again!
Some people complain when they don't have X percent of coverage... Some tests are useless, and 100% coverage does not mean you've tested everything that can make your code break, only that it won't break for the ways you used it!
A test is a way of precisely describing something you want. Adding a test expands the scope of what you want, or adds details of what you want.
If you can't think of anything more that you want, or any refinements to what you want, then move on to something else. You can always come back later.
Tests in TDD are about covering the specification, in fact they can be a substitute for a specification. In TDD, tests are not about covering the code. They ensure the code covers the specification, because the code will fail a test if it doesn't cover the specification. Any extra code you have doesn't matter.
So you have enough tests when the tests look like they describe all the expectations that you or the stakeholders have.
Maybe I missed something somewhere in the Agile/XP world, but my understanding of the process was that the developer and the customer specify the tests as part of the feature. This allows the test cases to substitute for more formal requirements documentation, helps identify the use cases for the feature, etc. So you're done testing and coding when all of these tests pass... plus any more edge cases that you think of along the way.
Alberto Savoia says that "if all your tests pass, chances are that your tests are not good enough". I think that is a good way to think about tests: ask whether you are covering the edge cases, pass some unexpected parameters, and so on. A good way to improve the quality of your tests is to work with a pair, especially a tester, and get help finding more test cases. Pairing with testers is good because they have a different point of view.
Of course, you could use a tool to do mutation testing and get more confidence in your tests. I have used Jester and it improved both my tests and the way that I wrote them. Consider using something like it.
Kind Regards
Theoretically you should cover all possible input combinations and test that the output is correct but sometimes it's just not worth it.
Many of the other comments have hit the nail on the head. Do you feel confident about the code you have written given your test coverage? As your code evolves do your tests still adequately cover it? Do your tests capture the intended behaviour and functionality for the component under test?
There must be a happy medium. As you add more and more test cases, your tests may become brittle as what is considered an edge case continuously changes. Following many of the earlier suggestions, it can be very helpful to get everything you can think of up front and then add new tests as the software grows. This kind of organic growth can help your tests grow without all the effort up front.
I am not going to lie: I often get lazy when going back to write additional tests. I might miss that property that contains no code, or the default constructor that I do not care about. Sometimes not being completely anal about the process can save you time in areas that are less than critical (the 100% code coverage myth).
You have to remember that the end goal is to get a top-notch product out the door and not kill yourself testing. If you have that gut feeling that you are missing something, then chances are you are, and you need to add more tests.
Good luck and happy coding.
You could always use a test coverage tool like EMMA (http://emma.sourceforge.net/) or its Eclipse plugin EclEmma (http://www.eclemma.org/) or the like. Some developers believe that 100% test coverage is a worthy goal; others disagree.
Just try to come up with every way within reason that you could cause something to fail. Null values, values out of range, etc. Once you can't easily come up with anything, just continue on to something else.
If down the road you ever find a new bug or think of another way something could fail, add the test.
It is not about code coverage. That is a dangerous metric, because code is "covered" long before it is "tested well".

Resources