In TDD, how do you write tests for code that inherently has side effects? - tdd

If a function's side effects are inherent within the design, how do I develop such a function?
For instance, if I wanted to implement a function like http.get( "url" ) and I stubbed the side effects as a service with dependency injection, it would look like:
var http = {
    "get": function( url, service ) {
        // Delegate to the injected service and resolve with its response
        return new Promise(function( resolve ) {
            service( url ).then(function( response ) {
                resolve( response );
            });
        });
    }
};
...but I would then need to implement the service, which would be identical to the original http.get(url), would have the same side effects, and would therefore put me in a development loop. Do I have to mock a server to test such a function, and if so, what part of the TDD development cycle does that fall under? Is it integration testing, or is it still unit testing?
Another example would be a model for a database. If I'm developing code that works with a database, I'll design an interface, abstract a model implementing that interface, and pass it into my code using dependency injection. As long as my model implements the interface, I can use any database and easily stub its state and responses to implement TDD for other functions which interact with a database (a rough sketch of what I mean follows below). What about that model, though? It's going to interact with a database -- that side effect seems inherent in the design, and abstracting it away puts me into a development loop when I go to implement that abstraction. How do I implement the model's methods without being able to abstract them away?
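A rough sketch of that arrangement (C#, with made-up names -- IUserStore, SqlUserStore and InMemoryUserStore are illustrative only):

using System.Collections.Generic;

// The interface the rest of the code depends on.
public interface IUserStore
{
    string GetName(int id);
}

// The "real" model -- this is the side-effecting code the question is about.
public class SqlUserStore : IUserStore
{
    public string GetName(int id)
    {
        // actual database access would live here
        throw new System.NotImplementedException();
    }
}

// A stub used when test-driving everything that sits above the model.
public class InMemoryUserStore : IUserStore
{
    private readonly Dictionary<int, string> users = new Dictionary<int, string>();

    public void Add(int id, string name) { users[id] = name; }
    public string GetName(int id) { return users[id]; }
}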

In TDD, how do you write tests for code that inherently has side effects?
I don't think I've seen a particularly clear answer for this anywhere; the closest is probably GOOS (Growing Object-Oriented Software, Guided by Tests) -- the "London" school of TDD tends to be focused outside-in.
But broadly, you need to have a sense that side effects belong in the imperative shell. They are usually implemented within an infrastructure component. So you'll typically want a higher level abstraction that you can pass to the functional part of your system.
For example, reading the system clock is a side effect, producing a time since epoch value. Most of your system shouldn't care where the time comes from, so the abstraction of reading the clock should be an input to the system.
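For instance, the clock read can be passed in as a delegate so the decision logic stays pure (a minimal C# sketch, not from the original answer; the names are illustrative):

using System;

// The side effect (reading the system clock) is injected as a delegate,
// so the pure logic below can be exercised with any fixed time we like.
public static class TokenPolicy
{
    public static bool IsExpired(DateTime issuedUtc, TimeSpan lifetime, Func<DateTime> nowUtc)
    {
        return nowUtc() - issuedUtc > lifetime;
    }
}

// Production call site: the imperative shell supplies the real clock.
// bool expired = TokenPolicy.IsExpired(issued, TimeSpan.FromMinutes(30), () => DateTime.UtcNow);

// Test call site: a fixed clock, no side effect at all.
// bool expired = TokenPolicy.IsExpired(issued, TimeSpan.FromMinutes(30), () => new DateTime(2020, 1, 1));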
Now, it can feel like "turtles all the way down" -- how do you test your interaction with the infrastructure? Kent Beck describes a stopping condition
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence....
I tend to lean on Hoare's observation
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies
Once you get down to an implementation of a side effect that is obviously correct, you stop worrying about it.
When you are staring at a side effect, and the implementation is not obviously correct, you start looking for ways to pull the hard part back into the functional core, further isolating the side effect.
The actual testing of the side effects typically happens when you start wiring all of the components together. Because of the side effects, these tests are typically slower; because they share mutable state, you often need to ensure that they are running sequentially.

If you are writing unit tests on a module like that, focus on the module itself, not on the dependency. For example, how is it supposed to react to the db/service being down, throwing an exception, returning null data, returning good data, etc.? That's why you mock the dependency and have it return different values or exhibit different behavior, such as throwing an exception.
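For example, a hand-rolled stub can simulate the dependency being down (a C# sketch with NUnit; the interface and class names are hypothetical):

using System;
using NUnit.Framework;

public interface IOrderRepository
{
    string Load(int id);
}

// Stub that simulates the dependency being unavailable.
class ThrowingRepository : IOrderRepository
{
    public string Load(int id) => throw new TimeoutException("db is down");
}

public class OrderService
{
    private readonly IOrderRepository repo;
    public OrderService(IOrderRepository repo) { this.repo = repo; }

    // The behaviour under test: translate a dependency failure into a safe default.
    public string Describe(int id)
    {
        try { return repo.Load(id); }
        catch (TimeoutException) { return "unavailable"; }
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void ReturnsFallbackWhenRepositoryIsDown()
    {
        var service = new OrderService(new ThrowingRepository());
        Assert.AreEqual("unavailable", service.Describe(42));
    }
}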

Related

PHPUnit - Creating tests after development

I've watched and read a handful of tutorials on PHPUnit and Test Driven Development and have recently begun working with Laravel, which extends the PHPUnit framework with its TestCase class. All of these things make sense to me as far as creating tests as you develop goes, and I find Laravel's extensions particularly intuitive (especially in regards to testing Controller routes).
However, I've recently been tasked with creating unit tests for a sizable app that's near completion. The app is built in CodeIgniter, and it was not built with any tests.
I find that I'm not entirely sure where to begin, or what steps to take in order to determine the tests I should create.
Should I be looking to test each controller method? Or do I need to break it down more than that? Admittedly, many of these controller methods are doing more than one task.
It is really difficult to write tests for an existing project. I suggest you first start by writing tests for classes that do not depend on other classes. Then you can continue with classes that are coupled to the classes you have already tested. You will increase your test coverage step by step by repeating this process.
Also, don't forget that sometimes you will need to refactor your code to make it testable. You should improve the design of the code; for example, if your controller methods are doing more than one task, you should split each method into smaller methods and test each of those independently.
I would also suggest you look at this question.
You are in a bit of a tight spot, but here is what I would do in your situation. You need to refactor (i.e. change) the existing code so that you end up with three types of functions.
The first type are those that deal with the outside world. By this I mean anything that talks to I/O, your framework, your operating system, or even libraries or code from stable modules. Basically, everything that has a dependency on code that you cannot, or may not, change.
The second group of functions are where you transform or create data structures. The only thing they should know about are the data structures that they receive as parameters and the only way they communicate back is by changing those structures or by creating and populating a new structure.
The third group consists of co-ordinating functions which make the calls to the outside world functions, get their returned data structures and pass those structures to the transforming functions.
Your testing strategy is then as follows: the second group can be tested by creating fake data structures, passing them in and checking that the transforms were done correctly. The third group of co-ordinating functions can be tested by dependency injection and mocking to see that they call the outside-world and transform functions correctly. Finally, the last group of functions should not be tested. You follow the maxim -- "make it so simple that there is obviously nothing wrong". See if you can keep each one to a single line of code. If you go over four lines of code for these then you are probably doing it wrong.
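A rough sketch of that three-way split in C# (all names here are invented for illustration):

using System;
using System.Linq;

public static class PriceCalculator          // second group: pure transforms
{
    public static decimal Total(decimal[] lineItems, decimal taxRate)
        => lineItems.Sum() * (1 + taxRate);
}

public interface IInvoiceGateway             // first group: the outside world
{
    decimal[] FetchLineItems(int invoiceId);
    void SaveTotal(int invoiceId, decimal total);
}

public class InvoiceProcessor                // third group: co-ordination only
{
    private readonly IInvoiceGateway gateway;
    public InvoiceProcessor(IInvoiceGateway gateway) { this.gateway = gateway; }

    public void Process(int invoiceId)
    {
        var items = gateway.FetchLineItems(invoiceId);
        var total = PriceCalculator.Total(items, 0.2m);
        gateway.SaveTotal(invoiceId, total);
    }
}

The transform is tested with plain data, the co-ordinator with a fake IInvoiceGateway, and the concrete gateway implementation is kept so small that it is obviously correct.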
If you are completely new to TDD I do however strongly suggest that you first get used to doing it on green field projects/modules. I made a couple of false starts on unit testing because I tried to bolt it onto projects afterwards. TDD is really a joy when you finally grok it so it would not be good if you get discouraged early on because of a too steep learning curve.

Interface Insanity

I'm drinking the Kool-Aid and loving it - interfaces, IoC, DI, TDD, etc. etc. It's working out pretty well. But I'm finding I have to fight a tendency to make everything an interface! I have a factory which is an interface. Its methods return objects which could be interfaces (might make testing easier). Those objects are injected with interfaces to the services they need. What I'm finding is that keeping the interfaces in sync with the implementations is adding to the work - adding a method to a class means adding it to the class plus the interface, the mocks, etc.
Am I factoring the interfaces out too early? Are there best practices to know when something should return an interface vs. an object?
Interfaces are useful when you want to mock an interaction between an object and one of its collaborators. However there is less value in an interface for an object which has internal state.
For example, say I have a service which talks to a repository in order to extract some domain object in order to manipulate it in some way.
There is definite design value in extracting an interface from the repository. My concrete implementation of the repository may well be strongly linked to NHibernate or ActiveRecord. By linking my service to the interface I get a clean separation from this implementation detail. It just so happens that I can also write super fast standalone unit tests for my service now that I can hand it a mock IRepository.
Considering the domain object which came back from the repository and which my service acts upon, there is less value. When I write test for my service, I will want to use a real domain object and check its state. E.g. after the call to service.AddSomething() I want to check that something was added to the domain object. I can test this by simple inspection of the state of the domain object. When I test my domain object in isolation, I don't need interfaces as I am only going to perform operations on the object and quiz it on its internal state. e.g. is it valid for my sheep to eat grass if it is sleeping?
In the first case, we are interested in interaction based testing. Interfaces help because we want to intercept the calls passing between the object under test and its collaborators with mocks. In the second case we are interested in state based testing. Interfaces don't help here. Try to be conscious of whether you are testing state or interactions and let that influence your interface or no interface decision.
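To make the distinction concrete, here is a small hand-rolled sketch (the repository and sheep echo the examples above, but the code itself is illustrative only):

// Interaction-based: the service's collaborator is an interface we can mock.
public interface IRepository
{
    void Save(Sheep sheep);
}

// State-based: the domain object is used concretely and inspected directly.
public class Sheep
{
    public bool IsSleeping { get; set; }
    public bool CanEatGrass() => !IsSleeping;
}

public class SheepService
{
    private readonly IRepository repository;
    public SheepService(IRepository repository) { this.repository = repository; }

    public void PutToBed(Sheep sheep)
    {
        sheep.IsSleeping = true;      // state we can assert on directly
        repository.Save(sheep);       // interaction we verify through a mock
    }
}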
Remember that (provided you have a copy of ReSharper installed) it is extremely cheap to extract an interface later. It is also cheap to delete the interface and revert to a simpler class hierarchy if you decide that you didn't need that interface after all. My advice would be to start without interfaces and extract them on demand, when you find that you want to mock the interaction.
When you bring IoC into the picture, then I would tend to extract more interfaces - but try to keep a lid on how many classes you shove into your IoC container. In general, you want to keep these restricted to largely stateless service objects.
Sounds like you're suffering a little from BDUF.
Take it easy with the Kool-Aid and let it flow naturally.
Remember that while flexibility is a worthy objective, added flexibility with IoC and DI (which to some extent are requirements for TDD) also increases complexity. The only point of flexibility is to make changes downstream quicker, cheaper or better. Each IoC/DI point increases complexity, and thus contributes to making changes elsewhere more difficult.
This is actually where you need a Big Design Up Front to some extent: identify what areas are most likely to change (and/or need extensive unit testing), and plan for flexibility there. Refactor to eliminate flexibility where changes are unlikely.
Now, I'm not saying that you can guess where flexibility will be needed with any kind of accuracy. You'll be wrong. But it's likely that you'll get something right. Where you later find you don't need flexibility, it can be factored out in maintenance. Where you need it, it can be factored in when adding features.
Now, the areas which may or may not change depend on your business problem and IT environment. Here are some recurring areas.
I'd always consider external interfaces where you integrate to other systems to be highly mutable.
Whatever code provides a back end to the user interface will need to support change in the UI. However, plan for changes in functionality primarily: don't go overboard and plan for different UI technologies (such as supporting both a smart client and a web application -- usage patterns will differ too much).
On the other hand, coding for portability to different databases and platforms is usually a waste of time, at least in corporate environments. Ask around and check what plans may exist to replace or upgrade technologies within the likely lifespan of your software.
Changes to data content and formats are a tricky business: while data will occasionally change, most designs I've seen handle such changes poorly, and thus you get concrete entity classes used directly.
But only you can make the judgement of what might or should not change.
I usually find that I want interfaces for "services" - whereas types which are primarily about "data" can be concrete classes. For instance, I'd have an Authenticator interface, but a Contact class. Of course, it's not always that clear-cut, but it's an initial rule of thumb.
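In code, that rule of thumb might look something like this (a sketch; the members are invented):

// "Service": behaviour worth abstracting, so it gets an interface.
public interface IAuthenticator
{
    bool Authenticate(string userName, string password);
}

// "Data": just state, so a plain concrete class is enough.
public class Contact
{
    public string Name { get; set; }
    public string Email { get; set; }
}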
I do feel your pain though - it's a little bit like going back to the dark days of .h and .c files...
I think the most important "agile" principle is YAGNI ("You Ain't Gonna Need It"). In other words, don't write extra code until it's actually needed, because if you write it in advance the requirements and constraints may well have changed when (if!) you finally do need it.
Interfaces, dependency injections, etc. - all this stuff adds complexity to your code, making it harder to understand and change. My rule of thumb is to keep things as simple as possible (but no simpler) and to not add complexity unless it gains me more than enough to offset the burden it imposes.
So if you are actually testing and having a mock object would be very useful then by all means define an interface that both your mock and real classes implement. But don't create a bunch of interfaces on the purely hypothetical grounds that it might be useful at some point, or that it is "correct" OO design.
Interfaces have the purpose of establishing a contract and are particularly useful when you want to change the class performing a task on the fly. If there is no need to change the classes an interface just might get in the way.
There are automatic tools to extract an interface, so it may be better to delay interface extraction until later in the process.
It depends very much on what you are providing... if you are working on internal things then the advice of "don't do them until needed" is reasonable. If, however, you are making an API that is to be consumed by other developers then changing things around to interfaces at a later date can be annoying.
A good rule of thumb is to make interfaces out of anything that needs to be subclassed. This is not an "always make an interface in that case" sort of thing; you still need to think about it.
So, the short answer is (and this works with both internal things and providing an API) is that if you anticipate more than one implementation is going to be needed then make it an interface.
Some things that generally would not be interfaces would be classes that only hold data, like say a Location class that deals with x and y. The odds of there being another implementation of that are slim.
Do not create the interfaces first - you ain't gonna need them. You cannot guess which classes will need an interface and which won't, so do not spend any time burdening the code with useless interfaces now.
But extract them when you feel the urge to do so - when you see the need of an interface - at the refactoring step.
Those answers can help too.

Performance of using static methods vs instantiating the class containing the methods

I'm working on a project in C#. The previous programmer didn't know object oriented programming, so most of the code is in huge files (we're talking around 4-5000 lines) spread over tens and sometimes hundreds of methods, but only one class. Refactoring such a project is a huge undertaking, and so I've semi-learned to live with it for now.
Whenever a method is used in one of the code files, the class is instantiated and then the method is called on the object instance.
I'm wondering whether there are any noticeable performance penalties in doing it this way? Should I make all the methods static "for now" and, most importantly, will the application benefit from it in any way?
From here, a static call is 4 to 5 times faster than constructing an instance every time you call an instance method. However, we're still only talking about tens of nanoseconds per call, so you're unlikely to notice any benefit unless you have really tight loops calling a method millions of times, and you could get the same benefit by constructing a single instance outside that loop and reusing it.
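In sketch form, the reuse option looks something like this (ReportParser is a hypothetical class standing in for the real code):

public class ReportParser
{
    public int Parse(string row) => row.Length;   // stand-in for the real work
}

public static class Example
{
    public static void Run(string[] rows)
    {
        // Instead of "new ReportParser().Parse(row)" inside the loop,
        // construct a single instance outside the loop and reuse it.
        var parser = new ReportParser();
        foreach (var row in rows)
        {
            parser.Parse(row);
        }
    }
}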
Since you'd have to change every call site to use the newly static method, you're probably better spending your time on gradually refactoring.
I have dealt with a similar problem where I work. The programmer before me created one controller class where all the BLL functions were dumped.
We are redesigning the system now and have created many Controller classes depending on what they are supposed to control e.g.
UserController, GeographyController, ShoppingController...
Inside each controller class they have static methods which make calls to cache or the DAL using the singleton pattern.
This has given us two main advantages. It is slightly faster (around 2-3 times faster, but we're talking nanoseconds here ;P). The other is that the code is much cleaner,
i.e.
ShoppingController.ListPaymentMethods()
instead of
new ShoppingController().ListPaymentMethods()
I think it makes sense to use static methods or classes if the class doesn't maintain any state.
It depends on what else that object contains -- if the "object" is just a bunch of functions then it's probably not the end of the world. But if the object contains a bunch of other objects, then instantiating it is gonna call all their constructors (and destructors, when it's deleted) and you may get memory fragmentation and so on.
That said, it doesn't sound like performance is your biggest problem right now.
You have to determine the goals of the rewrite. If you want to have nice testable, extendable & maintainable OO code then you could try to use objects and their instance methods. After all this is Object Oriented programing we're talking about here, not Class Oriented programming.
It is very straightforward to fake and/or mock objects when you define classes that implement interfaces and you execute instance methods. This makes thorough unit testing quick and effective.
Also, if you are to follow good OO principles (see SOLID at http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29 ) and/or use design patterns you will certainly be doing a lot of instance based, interface based development, and not using many static methods.
As for this suggestion:
It seems silly to me to create an object JUST so you can call a method which seemingly has no side effects on the object (from your description I assume this).
I see this a lot in .NET shops, and to me it violates encapsulation, a key OO concept. I should not be able to tell whether a method has side effects by whether or not the method is static. As well as breaking encapsulation, this means that you will need to change methods from static to instance if/when you modify them to have side effects. I suggest you read up on the Open/Closed principle for this one and see how the suggested approach, quoted above, squares with it.
Remember that old chestnut, 'premature optimization is the root of all evil'. I think in this case it means: don't jump through hoops using inappropriate techniques (i.e. class-oriented programming) until you know you have a performance issue. Even then, debug the issue and look for the most appropriate fix.
Static methods are a lot faster and use a lot less memory. There is this misconception that they're just a little faster. They're only a little faster as long as you don't put them in loops. BTW, some loops look small but really aren't, because the method containing the loop is itself called from another loop. You can tell the difference in code that performs rendering functions. "A lot less memory" is unfortunately true in many cases: an instance allows easy sharing of information with sister methods, whereas a static method has to ask for the information every time it needs it.
But as in driving cars, speed brings responsibility. Static methods usually have more parameters than their instance counterparts. Because an instance takes care of caching the shared variables, your instance methods will look prettier.
ShapeUtils.DrawCircle(stroke, pen, origin, radius);
ShapeUtils.DrawSquare(stroke, pen, x, y, width, length);
VS
ShapeUtils utils = new ShapeUtils(stroke, pen);
utils.DrawCircle(origin, radius);
utils.DrawSquare(x, y, width, length);
In this case, whenever the instance variables are used by all methods most of the time, instance methods are well worth it. Instances are NOT ABOUT STATE, they are about SHARING; although COMMON STATE is a natural form of SHARING, they are NOT THE SAME. The general rule of thumb is this: if the method is tightly coupled with other methods -- they love each other so much that when one is called, the other needs to be called too, and they probably share the same cup of water -- it should be made an instance method. Translating static methods into instance methods is not that hard: you only need to take the shared parameters and turn them into instance variables. The other way around is harder.
Or you can make a proxy class that will bridge the static methods. While it may seem to be more inefficient in theory, practice tells a different story. This is because whenever you need to call a DrawSquare once (or in a loop), you go straight to the static method. But whenever you are gonna use it over and over along with DrawCircle, you are gonna use the instance proxy. An example is the System.IO classes FileInfo (instance) vs File (static).
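A sketch of that kind of instance proxy over static helpers (the names and parameter types are invented):

// Static helpers for one-off calls.
public static class ShapeUtils
{
    public static void DrawCircle(string stroke, string pen, int originX, int originY, int radius) { /* drawing code */ }
    public static void DrawSquare(string stroke, string pen, int x, int y, int width, int length) { /* drawing code */ }
}

// Instance proxy that caches the shared parameters for repeated use,
// in the spirit of FileInfo wrapping what File exposes statically.
public class ShapePainter
{
    private readonly string stroke;
    private readonly string pen;

    public ShapePainter(string stroke, string pen)
    {
        this.stroke = stroke;
        this.pen = pen;
    }

    public void DrawCircle(int originX, int originY, int radius)
        => ShapeUtils.DrawCircle(stroke, pen, originX, originY, radius);

    public void DrawSquare(int x, int y, int width, int length)
        => ShapeUtils.DrawSquare(stroke, pen, x, y, width, length);
}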
Static methods are testable. In fact, even more testable than instance ones. A method GetSum(x,y) would be very easy to cover not just with unit tests but with load tests, integration tests and usage tests. Instance methods are fine for unit tests but horrible for every other kind of test (which matter more than unit tests, BTW), which is why we get so many bugs these days. The thing that makes ALL methods untestable is parameters that don't make sense, like (Sender s, EventArgs e), or global state like DateTime.Now. In fact, static methods are so good for testability that you see fewer bugs in the C code of a new Linux distro than in your average OO programmer's output (he's full of s***, I know).
I think you've partially answered this question in the way you asked it: are there any noticeable performance penalties in the code that you have?
If the penalties aren't noticeable, you needn't necessarily do anything at all. (Though it goes without saying the codebase would benefit dramatically from a gradual refactor into a respectable OO model.)
I guess what I'm saying is, a performance problem is only a problem when you notice that it's a problem.
It seems silly to me to create an object JUST so you can call a method which seemingly has no side effects on the object (from your description I assume this). It seems to me that a better compromise would be to have several global objects and just use those. That way you can put the variables that would normally be global into the appropriate classes so that they have slightly smaller scope.
From there you can slowly move the scope of these objects to be smaller and smaller until you have a decent OOP design.
Then again, the approach that I would probably use is different ;).
Personally, I would likely focus on structures and the functions which operate on them and try to convert these into classes with members little by little.
As for the performance aspect of the question, static methods should be slightly faster (but not much), since they don't involve constructing, passing, and destroying an object.
That's not the case in PHP, though; there, calling an object method is faster:
http://www.vanylla.it/tests/static-method-vs-object.php

Developing N-Tier App. In what direction?

Assuming you are implementing a user story that requires changes in all layers, from the UI (or service facade) down to the DB.
In what direction do you move?
From UI to Business Layer to Repository to DB?
From DB to Repository to Business Layer to UI?
It depends. (On what ?)
The best answer I've seen to this sort of question was supplied by the Atomic Object guys and their Presenter First pattern. Basically it is an implementation of the MVP pattern, in which (as the name suggests) you start working from the Presenter.
This provides you with a very light-weight object (since the presenter is basically there to marshal data from the Model to the View, and events from the View to the Model) that can directly model your set of user actions. When working on the Presenter, the View and Model are typically defined as interfaces, and mocked, so your initial focus is on defining how the user is interacting with your objects.
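A stripped-down sketch of that arrangement (C#, with invented member names), where the presenter only wires view events to model calls:

using System;

public interface IOrderView
{
    event EventHandler SubmitClicked;     // user action surfaced by the view
    void ShowConfirmation(string text);
}

public interface IOrderModel
{
    string PlaceOrder();
}

// The presenter is the only concrete piece needed to start test-driving:
// both collaborators are interfaces and can be mocked.
public class OrderPresenter
{
    public OrderPresenter(IOrderView view, IOrderModel model)
    {
        view.SubmitClicked += (s, e) => view.ShowConfirmation(model.PlaceOrder());
    }
}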
I generally like to work in this way, even if I'm not doing a strict MVP pattern. I find that focusing on user interaction helps me create business objects that are easier to interact with. We also use Fitnesse in house for integration testing, and I find that writing the fixtures for Fitnesse while building out my business objects helps keep things focused on the user's perspective of the story.
I have to say, though, that you end up with a pretty interesting TDD cycle when you start with a failing Fitnesse test, then create a failing Unit Test for that functionality, and work your way back up the stack. In some cases I'm also writing Database unit tests, so there is another layer of tests that get to be written, failed, and passed, before the Fitnesse tests pass.
If change is likely, start in the front. You can get immediate feedback from stakeholders. Who knows? Maybe they don't actually know what they want. Watch them use the interface (UI, service, or otherwise). Their actions might inspire you to view the problem in a new light. If you can catch changes before coding domain objects and the database, you save a ton of time.
If requirements are rigid, it's not as important. Start in the layer that's likely to be the most difficult - address risk early. Ultimately, this is one of those "more an art than a science" issues. It's probably a delicate interplay between layer design that creates the best solution.
Cheers.
I'd do it bottom-up, since you'll have some working results fast (i.e. you can write unit tests without a user interface, but you can't test the user interface until the model is done).
There are other opinions, though.
I would start modeling the problem domain. Create relevant classes representing the entities of the system. Once I feel confident with that, I'd try to find a feasible mapping for persisting the entities to the database. If you put too much work into the UI before you have a model of the domain, there is a significant risk that you need to re-work the UI afterwards.
Thinking about it, you probably need to do some updates to all of the layers anyway... =)

TDD ...how?

I'm about to start my first TDD (test-driven development) program, and I (naturally) have a TDD mental block, so I was wondering if someone could help guide me a bit on where I should start.
I'm creating a function that will read binary data from a socket and parse it into a class object.
As far as I see, there are 3 parts:
1) Logic to parse data
2) socket class
3) class object
What are the steps that I should take so that I could incrementally TDD? I definitely plan to first write the test before even implementing the function.
The issue in TDD is "design for testability"
First, you must have an interface against which to write tests.
To get there, you must have a rough idea of what your testable units are.
Some class, which is built by a function.
Some function, which reads from a socket and emits a class.
Second, given this rough interface, you formalize it into actual non-working class and function definitions.
Third, you start to write your tests -- knowing they'll compile but fail.
Part-way through this, you may start head-scratching about your function. How do you set up a socket for your function? That's a pain in the neck.
However, the interface you roughed out above isn't the law, it's just a good idea. What if your function took an array of bytes and created a class object? This is much, much easier to test.
So, revisit the steps, change the interface, write the non-working class and function, now write the tests.
Now you can fill in the class and the function until all your tests pass.
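Sketching that byte-array version of the interface (Message and MessageParser are invented names, not from the answer):

public class Message
{
    public byte Version { get; set; }
    public int Length { get; set; }
}

public static class MessageParser
{
    // Pure function over a byte array: trivial to feed with canned test data.
    public static Message Parse(byte[] data)
    {
        return new Message
        {
            Version = data[0],
            Length = data.Length
        };
    }
}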
When you're done with this bit of testing, all you have to do is hook in a real socket. Do you trust the socket libraries? (Hint: you should) Not much to test here. If you don't trust the socket libraries, now you've got to provide a source for the data that you can run in a controlled fashion. That's a large pain.
Your split sounds reasonable. I would consider the two dependencies to be the input and output. Can you make them less dependent on concrete production code? For instance, can you make it read from a general stream of data instead of a socket? That would make it easier to pass in test data.
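For instance, a sketch of that kind of stream-based seam (RecordReader and Record are made-up names):

using System.IO;

public class Record
{
    public byte Type { get; set; }
}

public class RecordReader
{
    // Accepting a Stream means any source will do: a NetworkStream in
    // production, a MemoryStream of canned bytes in a test.
    public Record ReadFrom(Stream input)
    {
        int first = input.ReadByte();
        return new Record { Type = (byte)first };
    }
}

// In a test, no socket is needed:
// var reader = new RecordReader();
// var record = reader.ReadFrom(new MemoryStream(new byte[] { 7 }));
// Assert.AreEqual(7, record.Type);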
The creation of the return value could be harder to mock out, and may not be a problem anyway - is the logic used for the actual population of the resulting object reasonably straightforward (after the parsing)? For instance, is it basically just setting trivial properties? If so, I wouldn't bother trying to introduce a factory etc there - just feed in some test data and check the results.
First, start thinking "the testS", plural, rather than "the test", singular. You should expect to write more than one.
Second, if you have a mental block, consider starting with a simpler challenge. Lower the difficulty until it's really easy to do, then move on to more substantial work.
For instance, assume you already have a byte array with the binary data, so you don't even need to think about sockets. All you need to write is something that takes in a byte[] and return an instance of your object. Can you write a test for that ?
If you still have a mental block, lower it yet another notch. Assume your byte array is only going to contain default values anyway. So you don't even have to worry about the parsing, just about being able to return an instance of your object that has all values set to the defaults. Can you write a test for that?
I imagine something like:
public void testFooReaderCanParseDefaultFoo() {
    FooReader fr = new FooReader();
    Foo myFoo = fr.buildFoo();
    assertEquals(0, myFoo.bar());
}
That's rock bottom, right? You're only testing the Foo constructor. But you can then move up to the next level:
public void testFooReaderGivenBytesBuildsFoo() {
    FooReader fr = new FooReader();
    byte[] fooData = {1};
    fr.consumeBytes(fooData);
    Foo myFoo = fr.buildFoo();
    assertEquals(1, myFoo.bar());
}
And so on...
'The best testing framework is the application itself'
I believe that a common misconception amongst developers is that they make a strong association between testing frameworks and TDD principles. I would advise re-reading the official docs on TDD, bearing in mind that there is no real relationship between testing frameworks and TDD. After all, TDD is a paradigm, not a framework.
Upon reading the wiki on TDD (https://en.wikipedia.org/wiki/Test-driven_development), I've come to realise that to an extent things are a little bit open to interpretation.
There are various personal styles of TDD mainly due to the fact that TDD principles are open to interpretation.
I'm not here to say anyone is wrong, but I would like to share my techniques with you and explain how they have served me well. Bear in mind that I have been programming for 36 years; making my programming habits very well evolved.
Code reuse is overrated. Reuse code too much and you'll end up with bad abstractions, and it will become very difficult to fix or change something without it affecting something else. The obvious advantage is less code to manage.
Repeating too much code leads to code management problems and oversized code bases. However it does have the advantage of good separation of concerns (the ability to tweak, change and fix things without affecting other parts of the app).
Don't repeat/refactor too much, don't reuse too much. Code needs to be maintainable. It’s important to understand and respect the balance between code reuse and abstraction/separation of concerns.
When deciding whether to reuse code I base the decision on: .... Will the nature of this code change in context throughout the app codebase? If the answer is no, then I reuse it. If the answer is yes or I'm not sure, I repeat/refactor it. I will however revise my codebases from time to time and see if any of my repeated code can be merged without compromising separation of concerns/abstraction.
As far as my basic programming habits are concerned, I like to write the conditions (if then else switch case etc) first; test them, then fill the conditions with the code and test again. Keep in mind there's no rule that you have to do this in a unit test. I refer to this as the low level stuff.
Once my low level stuff is done, I'll either reuse the code or refactor it into another part of the app, but only after testing it very thoroughly. The problem with repeating/refactoring badly tested code is that if it's broken, you have to fix it in multiple places.
BDD to me is a natural follow-on from TDD. Once my code base is well tested I can easily tweak behaviours by moving entire blocks of code around. The cool thing about my programming habits is that sometimes I move code around and discover useful behaviours that I didn't even intend. It can sometimes even be useful for rebranding stuff to seem like a completely different code base.
To this end my code bases tend to start out a bit slow and pick up momentum because as I advance toward the end of development I have more and more code to refactor from or reuse.
The advantages for me in the way that I code is that, I am able to take on very high levels of complexity as this is promoted by good separation of concerns. It’s also awesome for writing highly optimised code. However the well optimised code tends to be a bit bloated, but to my knowledge there is no way to write optimized code without a bit of bloating. If the app doesn't need high processor efficiency, there's nothing stopping me from de-bloating my code. I'm of the opinion that server side code should be optimised and most client side code normally doesn't require it.
Going back to the topic of testing frameworks, I use them to just save a bit of compiler time.
As far as following story boards is concerned, that comes naturally to me without actually considering it. I've noticed most devs develop in the natural order of story boards even when they are not available.
As a general separation of concerns strategy, in most apps I separate concerns based on UI forms. For example, I'll reuse code within a form and repeat/refactor across forms. This is only a general rule; there are times when I have to think outside the box. Sometimes repeating code can serve well for making code processor-efficient.
As a little addendum to my TDD habits: I do optimizations and fault tolerance last. I will try to avoid using try/catch blocks as much as possible and write my code in such a way as to not need them. For example, rather than catch a null, I will check for null; rather than catch an index out of bounds, I will scrutinise my code so that it never happens. I find that error trapping too early in app development leads to semantic errors (behavioural errors that don't crash the app). Semantic errors can be very hard to trace or even notice.
Well that’s my 10 cents. Hope it helps.
Test Driven Development?
So, this means you should start with writing a test first.
Write a test which contains the code showing how you want to use your class. The class or method that you are going to test with this test doesn't even exist yet.
For instance, you could write a test first like this:
[Test]
public void CanReadDataFromSocket()
{
    // create a SocketReader instance which takes a socket or a mock socket in its constructor
    SocketReader r = new SocketReader( ... );
    byte[] data = r.Read();
    Assert.IsTrue(data.Length > 0);
}
I'm just making up an example here, of course.
Next, once you're able to read data from a socket, you can start thinking about how you'll parse it, and write a test where you use the 'Parser' class, which takes the data that you've read and outputs an instance of your data class.
etc...
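Such a follow-up test might look roughly like this (Parser and MyData are hypothetical, in the same style as the test above):

[Test]
public void CanParseDataIntoDataClass()
{
    byte[] data = { 0x01, 0x02 };          // canned bytes, no socket needed here

    Parser parser = new Parser();
    MyData result = parser.Parse(data);

    Assert.IsNotNull(result);
    Assert.AreEqual(1, result.Version);    // assuming the first byte is a version field
}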
Knowing where to start writing tests and when to stop writing tests while using TDD, is a common problem when starting out.
I have found that it can sometimes help to write an integration test first. Doing so will help create some of the common objects you will be using. It will also allow you to focus your thoughts and tests, since you will need to start writing tests to make the integration test pass.
When I was starting with TDD, I read these 3 rules by Uncle Bob that really helped me out:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
In a shorter version it would be:
Write only enough of a unit test to fail.
Write only enough production code to make the failing unit test pass.
As you can see, this is very simple.
