PHPUnit test dependencies - laravel

A similar question has been asked before but I don't quite understand the answer. My specific case is that I have a unit test which tests the registration of a user via a REST API endpoint. User registration however depends on a few records which must exist in the database, otherwise it will fail. Inserting these records into the database is most definitely a test case by itself too. So my question is, should I execute my tests in a specific order in order for the records to exist, or should I explicitly insert the records again in every testcase that depends on it?
It might be somewhat irrelevant but I'm using Laravel 5, so testing is done in PHPUnit.

should I execute my tests in a specific order in order for the
records to exist, or should I explicitly insert the records again in
every testcase that depends on it?
I think the correct answer here is that you should not do either (but please read on, it might still be ok to do the latter, though not perfect).
You say registering the user is a test case in itself? Very well then: write that test, and let's assume in what follows that you have it.
Creating tests so that they run in order
Let's deal with the first option: creating those rows once and then running multiple tests against them.
I think this is a very flawed approach no matter the circumstances. All of a sudden all tests depend on one another.
Say you run tests A, B and C against those rows. Maybe right now none of them alters the rows. But there is no way to be sure that no bug is ever introduced into B that alters the data (it needn't even be a bug; it could just be that the underlying functionality changes).
Now you're in a situation where test C might pass, but only if B did not run before. This is an entirely unacceptable situation, especially when the reverse is true, C only passing if B ran.
This could show up as, say, a fresh installation of your app throwing errors in real life, while your development setup, which contains a bunch of data, works fine, and so do the tests, because B created a certain state in your database (one that maybe also happens to exist in your dev database).
Then you give it out to some poor customer and all of a sudden "option X" is not set, or the initial admin user does not exist or whatever :)
=> bad plan
Running the Setup for Every Test that depends on it
This is a significantly better plan. Now you at least have full control of your database state in every test, and the tests all run independently of one another.
The order in which they run will not affect the outcome.
=> good
Also, this is a relatively standard thing to do for a subset of tests. Just subclass your main test case class and make every test that depends on that setup a subclass of it, like so:
abstract class NeedsDbSetupTestCase extends MyAppMainTestCase {

    protected function setUp() {
        parent::setUp();
        $this->setupDb();
    }

    private function setupDb() {
        // add your rows and tables and such
    }
}
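A concrete test that needs those rows can then simply extend the base class. A minimal sketch, assuming MyAppMainTestCase ultimately extends Laravel's TestCase so $this->call() is available (the endpoint and field names are made up for illustration):

class RegistrationTest extends NeedsDbSetupTestCase {

    public function testUserCanRegisterViaApi() {
        // setUp() has already inserted the records registration depends on,
        // so this test can safely assume they exist
        $response = $this->call('POST', '/api/register', [
            'email'    => 'jane@example.com',
            'password' => 'secret',
        ]);

        $this->assertEquals(200, $response->getStatusCode());
    }
}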
=> acceptable idea
The Optimal Approach
The above still comes with some drawbacks. For one, it isn't really a unit test anymore once it depends on very specific database interactions, which makes it less valuable for pinpointing an issue exactly. Admittedly, though, this is in many cases more a theoretical than a practical issue :)
What is much more likely to become a practical issue is performance. You are adding a bunch of database writes that might need to run hundreds of times once your test suite grows. At the beginning of your project this might mean it takes 4s to run instead of 2s :P ... once the project grows, though, you might find yourself losing a lot of time because of this.
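In Laravel specifically, one common way to keep those per-test writes cheap (assuming Laravel 5.1+, which ships the DatabaseTransactions trait in its testing helpers) is to wrap every test in a transaction that gets rolled back afterwards, so setup rows never accumulate between tests:

use Illuminate\Foundation\Testing\DatabaseTransactions;

class RegistrationTest extends TestCase {

    // every test runs inside a DB transaction that is rolled back
    // when the test finishes
    use DatabaseTransactions;
}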
One last issue you might face is that your test suite becomes dependent on the database it runs against. Maybe it passes against MySQL 5.5 and fails against 5.6 (an academic example, I guess :P) => you might see all kinds of strange behavior, with tests passing locally but failing in CI and whatnot (somewhat likely, depending on your setup).
Since you are interested in this in a more generic sense, let me outline the proper way of handling it generically too :)
What it will always come down to is that a situation like this causes you trouble:
class User {
    private $id;

    public function __construct($id) { $this->id = $id; }

    public function get_data() {
        // every call hits the database; this is what ties your tests to it
        return make_a_sql_call_and_return_row_as_array(
            "SELECT propertya, propertyb FROM users WHERE id = " . $this->id);
    }
}
Now some other method that actually uses the return value of get_data() is to be tested, and you need that data in the db :) ... or you just mock your User object!
Assume you have some method in another class that uses that User object, and your test looks a little something like this:
// run this in the context of the class that sets up the db for you
$user = new User($user_id);
$this->assertTrue(some_method_or_function($user));
All you need here from $user is, say, to return the array [1, 2]. Instead of inserting that row and then using a real instance of User, just create the mock:
// this one doesn't do anything yet; it returns null on every method
$user = $this->getMockBuilder('User')
             ->disableOriginalConstructor()
             ->getMock();
// now just make it return what you want it to return
$user->method('get_data')->willReturn(array(1, 2));
// and run your test lightning fast, without ever touching the database,
// while getting the same result :)
$this->assertTrue(some_method_or_function($user));
Another hidden (but valuable) benefit of this approach is that setting up the mocks actually forces you to think about the details that go into every class's behavior, giving you a significantly more detailed understanding of your app in the end.
Obviously the downside is that it (not always but often) requires a lot more work to code your tests this way and the benefit might not be worth the trouble.
Especially when working with frameworks like WordPress that your code depends on, it might be somewhat infeasible to really mock all DB interaction, while existing libraries provide slower but trivial-to-implement database testing capabilities for your code :)
But in general option 3 is the way to go, option one is just wrong and option two might be what everyone eventually does in real life :D

Related

Test Driven Development initial implementation

A common practice of TDD is that you make tiny steps. But one thing which is bugging me is something I've seen a few people do, whereby they just hardcode values/options, and then refactor later to make it work properly. For example…
describe Calculator
  it should multiply
    assert Calculator.multiply(4, 2) == 8
Then you do the least possible to make it pass:
class Calculator
  def self.multiply(a, b)
    return 8
  end
end
And it does!
Why do people do this? Is it to ensure they're actually implementing the method in the right class or something? Because it just seems like a sure-fire way to introduce bugs and give false confidence if you forget something. Is it a good practice?
This practice is known as "Fake it 'til you make it." In other words, put fake implementations in until such time as it becomes simpler to put in a real implementation. You ask why we do this.
I do this for a number of reasons. One is simply to ensure that my test is being run. It's possible to be configured wrong so that when I hit my magic "run tests" key I'm actually not running the tests I think I'm running. If I press the button and it's red, then put in the fake implementation and it's green, I know I'm really running my tests.
Another reason for this practice is to keep a quick red/green/refactor rhythm going. That is the heartbeat that drives TDD, and it's important that it have a quick cycle. Important so you feel the progress, important so you know where you're at. Some problems (not this one, obviously) can't be solved in a quick heartbeat, but we must advance on them in a heartbeat. Fake it 'til you make it is a way to ensure that timely progress. See also flow.
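To make the rhythm concrete: the fake only survives until a second test triangulates it away. A sketch in PHPUnit (translated from the Ruby-style pseudocode above; Calculator is the question's invented class):

class CalculatorTest extends PHPUnit_Framework_TestCase {

    public function testMultipliesFourByTwo() {
        $this->assertSame(8, Calculator::multiply(4, 2));
    }

    // this case cannot pass with the hard-coded 8, so it forces
    // the real implementation: return $a * $b;
    public function testMultipliesThreeByFive() {
        $this->assertSame(15, Calculator::multiply(3, 5));
    }
}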
There is a school of thought, which can be useful in training programmers to use TDD, that says you should not have any lines of source code that were not originally part of a unit test. By first coding the algorithm that passes the test into the test, you verify that your core logic works. Then, you refactor it out into something your production code can use, and write integration tests to define the interaction and thus the object structure containing this logic.
Also, religious TDD adherence would tell you that there should be no logic coded that a requirement, verified by an assertion in a unit test, does not specifically state. Case in point; at this time, the only test for multiplication in the system is asserting that the answer must be 8. So, at this time, the answer is ALWAYS 8, because the requirements tell you nothing different.
This seems very strict, and in the context of a simple case like this, nonsensical; to verify correct functionality in the general case, you would need an infinite number of unit tests, when you as an intelligent human being "know" how multiplication is supposed to work and could easily set up a test that generated and tested a multiplication table up to some limit that would make you confident it would work in all necessary cases. However, in more complex scenarios with more involved algorithms, this becomes a useful study in the benefits of YAGNI. If the requirement states that you need to be able to save record A to the DB, and the ability to save record B is omitted, then you must conclude "you ain't gonna need" the ability to save record B, until a requirement comes in that states this. If you implement the ability to save record B before you know you need to, then if it turns out you never need to then you have wasted time and effort building that into the system; you have code with no business purpose, that regardless can still "break" your system and thus requires maintenance.
Even in the simpler cases, you may end up coding more than you need if you code beyond requirements that you "know" are too light or specific. Let's say you were implementing some sort of parser for string codes. The requirements state that the string code "AA" = 1, and "AB" = 2, and that's the limit of the requirements. But you know the full library of codes in this system includes 20 others, so you include logic and tests that parse the full library. You go back to the client, expecting your payment for time and materials, and the client says "we didn't ask for that; we only ever use the two codes we specified in the tests, so we're not paying you for the extra work". And they would be exactly right; you've technically tried to bilk them by charging for code they didn't ask for and don't need.

TDD and Test Data

I'm new to TDD and was wondering where to begin. I've read about as much as I think I can without throwing up and am still confused what to test and what not to test. For example, I know, from reading, we should not run tests against databases, thus enters the mocking framework. We mock our repositories so they will return fake data. My question is do we test requirements for data constants? For example, a requirement may state, a person should always have a name, thus:
Assert.IsNotNull(personObject.Name);
Should always be true, but how do I test it without having "fake" data? Do I care to test that type of requirement?
Let's take your requirement "a person should always have a name". Where could we start?
First, we need some clarity. I believe that when you say "should always have a name", you mean "should never have a null or empty string" as a name.
What we probably really mean here is "when a person is stored in our database, its name cannot be null or empty". The first line of attack would be to enforce that with a constraint in your database; however, nothing protects you against a rogue DBA removing that constraint, so you may want to test that what you think is true about the system, will be flagged by a failing test if it changes. For this purpose you would write a test like "when my app sends a Person with a Null name to be saved to the DB, it should fail miserably". That's not a unit test, it is more of an integration test - and it is more complicated to write than a good old unit test.
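That integration test might be sketched like this (in PHP for consistency with the first question in this roundup; PersonRepository, Person and the test database credentials are all invented names):

class PersonPersistenceTest extends PHPUnit_Framework_TestCase {

    public function testSavingAPersonWithANullNameFailsMiserably() {
        // a real connection to a throwaway test database
        $pdo = new PDO('mysql:host=localhost;dbname=myapp_test', 'test', 'test');
        $repo = new PersonRepository($pdo);

        // the DB constraint, not our code, is expected to reject this
        $this->setExpectedException('RuntimeException');
        $repo->save(new Person(null));
    }
}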
However, this still doesn't cover the scenario of a rogue DBA removing the constraint and directly creating records which have null names. In other words, your app cannot trust the data it gets back to be correct, so the question becomes: how do you want your domain to deal with the possibility of persons with null names?
A very literal approach could be to enforce that Person can only be constructed with non-null Name, and throws otherwise. That would be easy to unit test and enforce, but will probably make development painful. A more pleasant approach would be to have no constraints on Person, but a Validator class, which can Validate a Person for broken rules. This is more palatable because you could now do anything you wish to a Person (figuratively) and then Validate whether that Person is still in a Valid state.
This has the benefit of:
1) being very easily testable: creating a unit test for such a validator is a piece of cake, and
2) addressing the rogue DBA issue: you can now validate everything that comes from or goes to the outside of the app by applying the Validator.
So, being the lazy developer that I am, this is where I would start: go with the Validator, because it addresses my problem, while being much easier to test than something involving the data. In other words, in general I tend to stick as much as possible with Unit Tests (i.e. completely in memory), and to have as much as I can of my business logic in the code / domain, because it's easier to have all in one place, and easier to test.
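For what it's worth, such a validator can be tiny. A sketch in PHP (a Person class with a getName() accessor is assumed; the rule wording is invented):

class PersonValidator {

    // returns a list of broken rules; an empty array means the person is valid
    public function validate(Person $person) {
        $errors = [];
        if ($person->getName() === null || trim($person->getName()) === '') {
            $errors[] = 'A person must always have a name.';
        }
        return $errors;
    }
}

A unit test then only has to assert that validate() returns an empty array for a valid Person and a non-empty one otherwise, with no database in sight.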
You can use acceptance or integration testing to cover your database and constraints.
For instance, on one project, we had a separate "integration" package with tests which just checked our NHibernate bindings. If you're not using NHibernate, you could just do this with your repositories, keeping the database connections in place. You can verify constraints, referential integrity, etc. from this level.
If you're still looking for guidance on other aspects of TDD, try Dan North's post on BDD, which starts:
"I had a problem... programmers wanted to know where to start, what to test and what not to test, how much to test in one go, what to call their tests, and how to understand why a test fails."

TDD: Where to start the first test [closed]

So I've done unit testing some, and have experience writing tests, but I have not fully embraced TDD as a design tool.
My current project is to rework an existing system that generates serial numbers as part of the company's assembly process. I have an understanding of the current process and workflow from looking at the existing system. I also have a list of new requirements and how they are going to modify the workflow.
I feel like I'm ready to start writing the program and I've decided to force myself to finally do TDD from the start to end.
But now I have no idea where to start. (I also wonder if I'm cheating the TDD process by already have an idea of the program flow for the user.)
The user flow is really serial and is just a series of steps. As an example, the first step would be:
the user submits a manufacturing order number and receives a list of serializable part numbers from that order's bill of materials
The next step is started when the user selects one of the part numbers.
So I was thinking I can use this first step as a starting point. I know I want a piece of code that takes a manufacturing order number and returns a list of part numbers.
// This isn't what I'd want my code to end up looking like,
// but it is the simplest statement of what I want
IList<string> partNumbers = GetPartNumbersForMfgOrder(mfgOrder);
Reading Kent Beck's example book, he talks about picking small tests. This seems like a pretty big black box. It's going to require a mfg order repository, and I have to crawl a product structure tree to find all applicable part numbers for this mfg order, and I haven't even defined my domain model in code at all.
So on one hand that seems like a crappy start - a very general high level function. On the other hand, I feel like if I start at a lower level I'm really just guessing what I might need and that seems anti-TDD.
As a side note... is this how you'd use stories?
As an assembler
I want to get a list of part numbers on a mfg order
So that I can pick which one to serialize
To be truthful, an assembler would never say that. All an assembler wants is to finish the operation on the mfg order:
As an assembler
I want to mark parts with a serial number
So that I can finish the operation on the mfg order
Here's how I would start. Lets suppose you have absolutely no code for this application.
Define the user story and the business value that it brings: "As a User I want to submit a manufacturing order number and receive a list of part numbers for that order, so that I can send the list to the inventory system"
Start with the UI. Create a very simple page (let's suppose it's a web app) with three fields: label, list and button. That's good enough, isn't it? The user could copy the list and send it to the inventory system.
Use a pattern to base your design on, like MVC.
Define a test for your controller method that gets called from the UI. You're testing here that the controller works, not that the data is correct: Assert.AreEqual(3, controller.RetrieveParts(mfgOrder).Count)
Write a simple implementation of the controller to make sure that something gets returned: return new List<MfgOrder>{new MfgOrder(), new MfgOrder(), new MfgOrder()}; You'll also need to implement classes for MfgOrder, for example.
Now your UI is working! Working incorrectly, but working. So let's expect the controller to get the data from a service or DAO. Create a mock DAO object in the test case, and add an expectation that the method "partsDao.GetPartsInMfgOrder()" is called (there's a sketch of this after the list).
Create the DAO class with the method. Call the method from the controller. Your controller is now done.
Create a separate test to test the DAO, finally making sure it returns the proper data from the DB.
Keep iterating until you get it all done. After a little while, you'll get used to it.
The main point here is separating the application in very small parts, and testing those small parts individually.
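Steps 4 to 6 of that walkthrough might look like this in PHPUnit (a sketch only: PartsController and PartsDao are the answer's invented names carried over, and the order number is made up):

class PartsControllerTest extends PHPUnit_Framework_TestCase {

    public function testRetrievePartsDelegatesToTheDao() {
        $dao = $this->getMockBuilder('PartsDao')->getMock();

        // the expectation from step 5: the controller must ask the DAO
        $dao->expects($this->once())
            ->method('getPartsInMfgOrder')
            ->with('MO-1234')
            ->willReturn(['part-a', 'part-b', 'part-c']);

        $controller = new PartsController($dao);

        // testing that the controller works, not that the data is correct
        $this->assertCount(3, $controller->retrieveParts('MO-1234'));
    }
}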
This is perfectly okay as a starting test. With this you define expected behavior - how it should work. Now, if you feel you've taken a much bigger bite than you'd have liked, you can temporarily ignore this test and write a more granular test that takes you part of the way, or at least midway. Then write other tests that take you towards the goal of making the first big test pass. Red-Green-Refactor at each step.
Small tests, I think, mean that you should not be testing a whole lot of stuff in one test, e.g. "are components A, B and C of D in state1, state2 and state3 after I've called Method1(), Method2() and Method3() with these parameters on D?"
Each test should test just one thing. You can search SO for qualities of good tests. But I'd consider your test to be a small test because it is short and focussed on one task - 'Getting PartNumbers From Manufacturing Order'
Update: As a to-try suggestion (AFAIR from Beck's book), you may want to sit down and come up with a list of one-line tests for the SUT on a piece of paper. Now you can choose the easiest (tests that you're confident you'll be able to get done) in order to build some confidence, OR you could attempt one that you're 80% confident about but that has some gray areas (my choice too), because it'll help you learn something about the SUT along the way. Keep the ones that you've no idea how to approach for the end... hopefully it'll be clearer by the time the easier ones are done. Strike them off one by one as and when they turn green.
I think you have a good start, even if you don't quite see it that way. A test that is supposed to spawn more tests makes total sense to me: if you think about it, do you even know yet what a Manufacturing Order number or a Part Number is? You may have to build those, which leads to other tests, but eventually you'll get down to the itty-bitty tests, I believe.
Here's a story that may require a bit of breaking down:
As a User I want to submit a manufacturing order number and receive a list of serializable part numbers of that orders bill of materials
I think the key is to break things down over and over again into tiny pieces that make it easy to build the whole thing. That "divide and conquer" technique is handy at times. ;)
Well well, you've hit the exact same wall I did when I tried TDD for the first time :)
Since then, I gave up on it, simply because it makes refactoring too expensive - and I tend to refactor a lot during the initial stage of development.
With those grumpy words out of the way, I find that one of the most overlooked and most important aspects of TDD is that it forces you to define your class interfaces before actually implementing them. That's a very good thing when you need to assemble all your parts into one big product (well, into sub-products ;) ). What you need to do before writing your first tests is to have your domain model, deployment model and preferably a good chunk of your class diagrams ready before coding - simply because you need to identify your invariants, min and max values etc. before you can test for them. You should be able to identify these on a unit-testing level from your design.
So, in my experience (not in the experience of some author who enjoys mapping real-world analogies to OO :P ), TDD should go like this:
Create your deployment diagram, from the requirement specification (ofc, nothing is set in stone - ever)
Pick a user story to implement
Create or modify your domain model to include this story
Create or modify your class-diagram to include this story (including various design classes)
Identify test-vectors.
Create the tests based on the interface you made in step 4
Test the tests(!). This is a very important step..
Implement the classes
Test the classes
Go have a beer with your co-workers :)

Is it bad practice to run tests on a database instead of on fake repositories?

I know what the advantages are and I use fake data when I am working with more complex systems.
What if I am developing something simple and I can easily set up my environment in a real database and the data being accessed is so small that the access time is not a factor, and I am only running a few tests.
Is it still important to create fake data or can I forget the extra coding and skip right to the real thing?
When I said real database I do not mean a production database, I mean a test database, but using a real live DBMS and the same schema as the real database.
The reasons to use fake data instead of a real DB are:
Speed. If your tests are slow you aren't going to run them. Mocking the DB can make your tests run much faster than they otherwise might.
Control. Your tests need to be the sole source of your test data. When you use fake data, your tests choose which fakes you will be using. So there is no chance that your tests are spoiled because someone left the DB in an unfamiliar state.
Order Independence. We want our tests to be runnable in any order at all. The input of one test should not depend on the output of another. When your tests control the test data, the tests can be independent of each other.
Environment Independence. Your tests should be runnable in any environment. You should be able to run them while on the train, or in a plane, or at home, or at work. They should not depend on external services. When you use fake data, you don't need an external DB.
Now, if you are building a small little application, and by using a real DB (like MySQL) you can achieve the above goals, then by all means use the DB. I do. But make no mistake, as your application grows you will eventually be faced with the need to mock out the DB. That's OK, do it when you need to. YAGNI. Just make sure you DO do it WHEN you need to. If you let it go, you'll pay.
It sort of depends what you want to test. Often you want to test the actual logic in your code not the data in the database, so setting up a complete database just to run your tests is a waste of time.
Also consider the amount of work that goes into maintaining your tests and test database. Testing your code with a database often means you are testing your application as a whole instead of the different parts in isolation. This often results in a lot of work keeping both the database and tests in sync.
And the last problem is that the test should run in isolation so each test should either run on its own version of the database or leave it in exactly the same state as it was before the test ran. This includes the state after a failed test.
Having said that, if you really want to test on your database you can. There are tools that help setting up and tearing down a database, like dbunit.
I've seen people trying to create unit tests like this, but almost always it turns out to be much more work than it is actually worth. Most abandoned it halfway through the project, many abandoning TDD completely, assuming the experience transfers to unit testing in general.
So I would recommend keeping tests simple and isolated, and encapsulating your code well enough that it becomes possible to test it in isolation.
As long as the real DB does not get in your way, and you can go faster that way, I would be pragmatic and go for it.
In unit-test, the "test" is more important than the "unit".
I think it depends on whether your queries are fixed inside the repository (the better option, IMO), or whether the repository exposes composable queries; for example - if you have a repository method:
IQueryable<Customer> GetCustomers() {...}
Then your UI could request:
var foo = GetCustomers().Where(x => SomeUnmappedFunction(x));

bool SomeUnmappedFunction(Customer customer) {
    return customer.RegionId == 12345 && customer.Name.StartsWith("foo");
}
This will pass for an object-based fake repo, but will fail for actual db implementations. Of course, you can nullify this by having the repository handle all queries internally (no external composition); for example:
Customer[] GetCustomers(int? regionId, string nameStartsWith, ...) {...}
Because this can't be composed, you can check the DB and the UI independently. With composable queries, you are forced to use integration tests throughout if you want it to be useful.
It rather depends on whether the DB is automatically set up by the test, also whether the database is isolated from other developers.
At the moment it may not be a problem (e.g. only one developer). However (for manual database setup) setting up the database is an extra impediment for running tests, and this is a very bad thing.
If you're just writing a simple one-off application that you absolutely know will not grow, I think a lot of "best practices" just go right out the window.
You don't need to use DI/IOC or have unit tests or mock out your db access if all you're writing is a simple "Contact Us" form. However, where to draw the line between a "simple" app and a "complex" one is difficult.
In other words, use your best judgment, as there is no hard-and-fast answer to this.
It is OK to do that in this scenario, as long as you don't see them as "unit" tests. Those would be integration tests. You also want to consider whether you will be manually testing through the UI again and again, as you might just automate your smoke tests instead. Given that, you might even consider not doing the integration tests at all, and just working at the functional/UI test level (as they will already be covering the integration).
As others have pointed out, it is hard to draw the line between complex and non-complex, and you would usually know only when it is too late :(. If you are already used to doing them, I am sure you won't get much overhead. If that were not the case, you could learn from it :)
Assuming that you want to automate this, the most important thing is that you can programmatically generate your initial condition. It sounds like that's the case, and even better you're testing real world data.
However, there are a few drawbacks:
Your real database might not cover certain conditions in your code. With fake data, you can force that behavior to happen.
And as you point out, you have a simple application; when it becomes less simple, you'll want to have tests that you can categorize as unit tests and system tests. The unit tests should target a simple piece of functionality, which will be much easier to do with fake data.
One advantage of fake repositories is that your regression / unit testing is consistent since you can expect the same results for the same queries. This makes it easier to build certain unit tests.
There are several disadvantages if your code (if not read-query only) modifies data:
- If you have an error in your code (which is probably why you're testing), you could end up breaking the production database. And even if you don't break it:
- if the production database changes over time and especially while your code is executing, you may lose track of the test materials that you added and have a hard time later cleaning it out of the database.
- Production queries from other systems accessing the database may treat your test data as real data and this can corrupt results of important business processes somewhere down the road. For example, even if you marked your data with a certain flag or prefix, can you assure that anyone accessing the database will adhere to this schema?
Also, some databases are regulated by privacy laws, so depending on your contract and who owns the main DB, you may or may not be legally allowed to access real data.
If you need to run on a production database, I would recommend running on a copy which you can easily create during off-peak hours.
If it's a really simple application and you can't see it growing, I see no problem running your tests on a real DB. If, however, you think the application will grow, it's important that you account for that in your tests.
Keep everything as simple as you can, and if you require more flexible testing later on, make it so. Plan ahead though, because you don't want to have a huge application in 3 years that relies on old and hacky (for a large application) tests.
The downsides to running tests against your database are lack of speed and the complexity of setting up your database state before running tests.
If you have control over this there is no problem in running the tests directly against the database; it's actually a good approach because it simulates your final product better than running against fake data. The key is to have a pragmatic approach and see best practice as guidelines and not rules.

TDD ...how?

I'm about to start out my first TDD (test-driven development) program, and I (naturally) have a TDD mental block..so I was wondering if someone could help guide me on where I should start a bit.
I'm creating a function that will read binary data from a socket and parse it into a class object.
As far as I see, there are 3 parts:
1) Logic to parse data
2) socket class
3) class object
What are the steps that I should take so that I could incrementally TDD? I definitely plan to first write the test before even implementing the function.
The issue in TDD is "design for testability"
First, you must have an interface against which to write tests.
To get there, you must have a rough idea of what your testable units are.
Some class, which is built by a function.
Some function, which reads from a socket and emits a class.
Second, given this rough interface, you formalize it into actual non-working class and function definitions.
Third, you start to write your tests -- knowing they'll compile but fail.
Part-way through this, you may start head-scratching about your function. How do you set up a socket for your function? That's a pain in the neck.
However, the interface you roughed out above isn't the law, it's just a good idea. What if your function took an array of bytes and created a class object? This is much, much easier to test.
So, revisit the steps, change the interface, write the non-working class and function, now write the tests.
Now you can fill in the class and the function until all your tests pass.
When you're done with this bit of testing, all you have to do is hook in a real socket. Do you trust the socket libraries? (Hint: you should) Not much to test here. If you don't trust the socket libraries, now you've got to provide a source for the data that you can run in a controlled fashion. That's a large pain.
Your split sounds reasonable. I would consider the two dependencies to be the input and output. Can you make them less dependent on concrete production code? For instance, can you make it read from a general stream of data instead of a socket? That would make it easier to pass in test data.
The creation of the return value could be harder to mock out, and may not be a problem anyway - is the logic used for the actual population of the resulting object reasonably straightforward (after the parsing)? For instance, is it basically just setting trivial properties? If so, I wouldn't bother trying to introduce a factory etc there - just feed in some test data and check the results.
First, start thinking "the testS", plural, rather than "the test", singular. You should expect to write more than one.
Second, if you have mental block, consider starting with a simpler challenge. Lower the difficulty until it's real easy to do, then move on to more substantial work.
For instance, assume you already have a byte array with the binary data, so you don't even need to think about sockets. All you need to write is something that takes in a byte[] and return an instance of your object. Can you write a test for that ?
If you still have mental block, lower it yet another notch. Assume your byte array is only going to contain default values anyway. So you don't even have to worry about the parsing, just about being able to return an instance of your object that has all values set to the defaults. Can you write a test for that ?
I imagine something like:
public void testFooReaderCanParseDefaultFoo() {
    FooReader fr = new FooReader();
    Foo myFoo = fr.buildFoo();
    assertEquals(0, myFoo.bar());
}
That's rock bottom, right? You're only testing the Foo constructor. But you can then move up to the next level:
public void testFooReaderGivenBytesBuildsFoo() {
    FooReader fr = new FooReader();
    byte[] fooData = {1};
    fr.consumeBytes(fooData);
    Foo myFoo = fr.buildFoo();
    assertEquals(1, myFoo.bar());
}
And so on...
'The best testing framework is the application itself'
I believe a common misconception amongst developers is that they mistakenly make a strong association between testing frameworks and TDD principles. I would advise re-reading the official docs on TDD, bearing in mind that there is no real relationship between testing frameworks and TDD. After all, TDD is a paradigm, not a framework.
Upon reading the Wikipedia article on TDD (https://en.wikipedia.org/wiki/Test-driven_development), I've come to realise that, to an extent, things are a little bit open to interpretation.
There are various personal styles of TDD mainly due to the fact that TDD principles are open to interpretation.
I'm not here to say anyone is wrong, but I would like to share my techniques with you and explain how they have served me well. Bear in mind that I have been programming for 36 years; making my programming habits very well evolved.
Code reuse is overrated. Reuse code too much and you'll end up with bad abstraction, and it will become very difficult to fix or change something without it affecting something else. The obvious advantage is less code to manage.
Repeating too much code leads to code management problems and oversized code bases. However it does have the advantage of good separation of concerns (the ability to tweak, change and fix things without affecting other parts of the app).
Don't repeat/refactor too much, don't reuse too much. Code needs to be maintainable. It’s important to understand and respect the balance between code reuse and abstraction/separation of concerns.
When deciding whether to reuse code I base the decision on: .... Will the nature of this code change in context throughout the app codebase? If the answer is no, then I reuse it. If the answer is yes or I'm not sure, I repeat/refactor it. I will however revise my codebases from time to time and see if any of my repeated code can be merged without compromising separation of concerns/abstraction.
As far as my basic programming habits are concerned, I like to write the conditions (if then else switch case etc) first; test them, then fill the conditions with the code and test again. Keep in mind there's no rule that you have to do this in a unit test. I refer to this as the low level stuff.
Once my low-level stuff is done, I'll either reuse the code or refactor it into another part of the app, but only after testing it very thoroughly. The problem with repeating/refactoring badly tested code is that, if it's broken, you have to fix it in multiple places.
BDD, to me, is a natural follow-on from TDD. Once my code base is well tested I can easily tweak behaviours by moving entire blocks of code around. The cool thing about my programming habits is that sometimes I move code around and discover useful behaviours that I didn't even intend. It can sometimes even be useful for rebranding stuff to seem like a completely different code base.
To this end my code bases tend to start out a bit slow and pick up momentum because as I advance toward the end of development I have more and more code to refactor from or reuse.
The advantages for me in the way that I code is that, I am able to take on very high levels of complexity as this is promoted by good separation of concerns. It’s also awesome for writing highly optimised code. However the well optimised code tends to be a bit bloated, but to my knowledge there is no way to write optimized code without a bit of bloating. If the app doesn't need high processor efficiency, there's nothing stopping me from de-bloating my code. I'm of the opinion that server side code should be optimised and most client side code normally doesn't require it.
Going back to the topic of testing frameworks, I use them to just save a bit of compiler time.
As far as following story boards is concerned, that comes naturally to me without actually considering it. I've noticed most devs develop in the natural order of story boards even when they are not available.
As a general separation of concerns strategy, in most apps I separate concerns based on UI forms. For example I’ll reuse code within a form and repeat/refactor across forms. This is only a generalistic rule. There are times when I have to think outside the box. Sometimes repeating code can serve well for making code processor efficient.
As a little addendum to my TDD habits: I do optimizations and fault tolerance last. I will try to avoid using try/catch blocks as much as possible and write my code in such a way as to not need them. For example, rather than catch a null, I will check for null; or rather than catch an index out of bounds, I will scrutinise my code so that it never happens. I find that error trapping too early in app development leads to semantic errors (behavioural errors that don't crash the app). Semantic errors can be very hard to trace or even notice.
Well that’s my 10 cents. Hope it helps.
Test Driven Development?
So, this means you should start with writing a test first.
Write a test which contains the code like 'how you want to use your class'. This class or method that you are going to test with this test, is not even there yet.
For instance, you could write a test first like this:
[Test]
public void CanReadDataFromSocket()
{
    // create a SocketReader instance which takes a socket (or a mock socket) in its constructor
    SocketReader r = new SocketReader( ... );
    byte[] data = r.Read();
    Assert.IsTrue(data.Length > 0);
}
For instance; I'm just making up an example here.
Next, once you're able to read data from a socket, you can start thinking about how you'll parse it, and write a test where you use the 'Parser' class, which takes the data that you've read and outputs an instance of your data class.
etc...
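That next test might be sketched like so (in PHPUnit for consistency with the first question in this roundup; Parser, parse() and DataClass are placeholder names):

class ParserTest extends PHPUnit_Framework_TestCase {

    public function testParseTurnsRawBytesIntoADataObject() {
        $rawData = "\x01\x02"; // pretend this came from SocketReader::Read()

        $parser = new Parser();
        $result = $parser->parse($rawData);

        $this->assertInstanceOf('DataClass', $result);
    }
}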
Knowing where to start writing tests and when to stop writing tests while using TDD, is a common problem when starting out.
I have found that it can sometimes help to write an integration test first. Doing so will help create some of the common objects you will be using. It will also allow you to focus your thoughts and tests, since you will need to start writing tests to make the integration test pass.
When I was starting with TDD, I read these 3 rules by Uncle Bob that really helped me out:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
In a shorter version it would be:
Write only enough of a unit test to fail.
Write only enough production code to make the failing unit test pass.
As you can see, this is very simple.
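Played out on a tiny, invented example (PHP, with a hypothetical greet() function), the rules force this strict alternation between test and production code:

class GreeterTest extends PHPUnit_Framework_TestCase {

    // rules 1 and 2: write no more test than is needed to fail
    // (calling a function that doesn't exist yet already counts as failing)
    public function testGreetsUserByName() {
        $this->assertSame('Hello, Ada', greet('Ada'));
    }
}

// rule 3: write no more production code than is needed to pass;
// the hard-coded value survives until a second test demands more
function greet($name) {
    return 'Hello, Ada';
}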

Resources