page object model: why not include assertions in page methods? - ruby

First-time poster. I've been working in UI automation for many years, but was only recently introduced to/instructed to work with the Page Object Model. Most of it is common sense and includes techniques I've been using already, but there's a particular fine point which I haven't been able to justify in my own mind, despite searching extensively for a well-reasoned explanation. I'm hoping someone here might enlighten me, as this question has caused some consternation as I try to integrate the POM with my own best practices.
From http://code.google.com/p/selenium/wiki/PageObjects:
The code presented above shows an important point: the tests, not the
PageObjects, should be responsible for making assertions about the
state of a page.... Of course, as with every guideline there are
exceptions...
From http://seleniumhq.org/docs/06_test_design_considerations.html#chapter06-reference:
There is a lot of flexibility in how the page objects may be designed,
but there are a few basic rules for getting the desired
maintainability of your test code. Page objects themselves should
never make verifications or assertions. This is part of your test
and should always be within the test’s code, never in a page object.
The page object will contain the representation of the page, and the
services the page provides via methods but no code related to what is
being tested should be within the page object.
There is one, single, verification which can, and should, be within
the page object and that is to verify that the page, and possibly
critical elements on the page, were loaded correctly. This
verification should be done while instantiating the page object.
Both of these "guidelines" allow for potential exceptions, but I couldn't disagree more with the basic premise. I'm accustomed to doing a considerable amount of verification within "page methods", and I think the presence of verification there is a powerful technique for finding issues in a variety of contexts (i.e., verification occurs every time the method is called) rather than only occurring in the limited context of particular tests.
For example, let's imagine that when you login to your AUT, some text appears that says "logged in as USER". It's appropriate to have a single test validate this specifically, but why wouldn't you want to verify it every time login is called? This artifact is not directly related to whether the page "loaded correctly" or not, and it's not related to "what is being tested" in general, so according to the POM guidelines above, it clearly SHOULDN'T be in a page method... but it seems to me that it clearly SHOULD be there, to maximize the power of automation by verifying important artifacts as often as possible, with as little forethought as possible. Putting verification code in page methods multiplies the power of automation by allowing you to get a lot of verification "for free", without having to worry about it in your tests, and such frequent verification in different contexts often finds issues which you would NOT find if the verification were limited to, say, a single test for that artifact.
In other words, I tend to distinguish between test-specific verification and "general" verification, and I think it's perfectly appropriate/desirable for the latter to be included - extensively - in page methods. This promotes thinner tests and thicker page objects, which generally increases test maintainability by reusing more code - despite the opposite contention in these guidelines. Am I missing the point? What's the real rationale for NOT wanting verification in page methods? Is the situation I've described actually one of the 'exceptions' described in these guidelines, and therefore actually NOT inconsistent with the POM? Thanks in advance for your thoughts. -jn-
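To make the distinction concrete, here is a rough Ruby sketch of the style described above; the class, locators and banner text are invented purely for illustration, not taken from any real code base.

class LoginPage
  def initialize(driver)
    @driver = driver   # a Selenium::WebDriver instance created elsewhere
  end

  def login(user, password)
    @driver.find_element(id: "user").send_keys(user)
    @driver.find_element(id: "password").send_keys(password)
    @driver.find_element(id: "submit").click
    # "general" verification baked into the page method, so it runs on
    # every call from every test, not just in one dedicated test
    banner = @driver.find_element(css: ".banner").text
    raise "unexpected banner: #{banner.inspect}" unless banner == "logged in as #{user}"
    self
  end
end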

As a guideline, assertions should be done in tests and not in page objects. Of course, there are times when this isn't a pragmatic approach, but those times are infrequent enough for the above guideline to be right. Here are the reasons why I dislike having assertions in page objects:
It is quite frustrating to read a test that just calls verify methods where assertions are buried elsewhere in page objects. Where possible, it should be obvious what a test is asserting; this is best achieved when assertions are directly in a test. By hiding the assertions somewhere outside of a test, the intent of the test is not so clear.
Assertions in browser tests can be expensive - they can really slow your tests down. When you have hundreds or thousands of tests, minutes/hours can be added to your test execution time; this is A Bad Thing. If you move the assertions to just the tests that care about those particular assertions you'll find that you'll have much quicker tests and you will still catch the relevant defects. The question included the following:
Putting verification code in page methods multiplies the power of automation by allowing you to get a lot of verification "for free"
Well, "Freedom Isn't Free" :) What you're actually multiplying is your test execution time.
Having assertions all over the place violates another good guideline; "One Assertion Per Test" ( http://blog.jayfields.com/2007/06/testing-one-assertion-per-test.html ). I don't stick religiously to it, but I try to follow the principle. Where possible, a test should be interested in one thing only.
The value of tests is reduced because one bug will cause loads of tests to fail, thus preventing them from testing what they should be testing.
For example, let's imagine that when you login to your AUT, some text appears that says "logged in as USER". It's appropriate to have a single test validate this specifically, but why wouldn't you want to verify it every time login is called?
If you have the assertion in the page object class and the expected text changes, all tests that log in will fail. If instead the assertion is in the test then only one test will fail - the one that specifically tests for the correct message - leaving all the other tests to continue running to find other bugs. You don't need 5,000 tests to tell you that the login message is wrong; 1 test will do ;)
Having a class do more than one thing violates the 'S' in SOLID, i.e. the Single Responsibility Principle (SRP). A class should be responsible for one thing, and one thing only. In this instance a page-object class should be responsible for modelling a page (or a section thereof) and nothing more. If it does any more than that (e.g. including assertions) then you're violating SRP.
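For contrast, here is a minimal sketch of the shape this guideline prefers, assuming RSpec and a LoginPage#login like the one sketched in the question but without the built-in banner check; the page object only exposes state, and the assertion lives in the test.

class HomePage
  def initialize(driver)
    @driver = driver
  end

  # Expose state; make no judgement about it here.
  def banner_text
    @driver.find_element(css: ".banner").text
  end
end

RSpec.describe "login banner" do
  it "greets the user by name" do
    LoginPage.new(driver).login("alice", "secret")                         # driver set up elsewhere
    expect(HomePage.new(driver).banner_text).to eq("logged in as alice")   # the assertion lives in the test
  end
end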

I too have struggled at times with this recommendation. I believe the reason behind this guideline is to keep your page objects reusable, and putting asserts inside your page objects could possibly limit their ability to be reused by a large number of unrelated tests. That said, I have put certain verification methods on my page objects like testing the caption for a header - in my experience, that is a better way to encapsulate test logic for elements of a page that don't change.
Another note - I have seen MVC applications that have domain models reused as page objects. When done correctly, this can significantly reduce redundant code in your testing library. With this pattern, the view models have no reference to a testing framework, so obviously, you could not put any asserts in them.

Your page object shouldn't perform an assertion because then the page object has to know about your test framework (unless you're using built-in language assertions). But your page needs to know its state to locate elements and perform actions.
The key is in the statement "Of course, as with every guideline there are exceptions..."
Your page should throw exceptions, not perform assertions. That way your test can catch the exception and bail or act accordingly. For instance:
page = ProfilePage.open
try
    page.ChangePassword(old, new)
catch notLoggedIn
    page.Login(user, pass)
assert page.contains "your password has been updated"
In this limited example you'd have to check again (and again) so it might not be the best way, but you get the idea. You could also just check state (twice)
if page.hasLoginDialog
    page.Login
if page.hasLoginDialog // (again!)
    assert.fail("can't login")
You could also just check that you have a profile page
try
page = site.OpenProfilePage
catch notOnProfilePage
or has the elements you need
try
profilepage.changePassword(old,new)
catch elementNotFound
or without throwing an exception
page = site.OpenProfilePage
if ! page instanceof ProfilePage
or with complex checking
assert page.looksLikeAProfilePage
It's not how you do it that matters. You want to keep logic in your tests to a minimum but you don't want your page objects to be tied to your test framework -- after all, you might use the same objects for scraping or data generation -- or with a different test framework that has its own assertions.
If you feel a need you can push your assertions out of your test case into test helper methods.
page = site.GoToProfilePage
validate.looksLikeProfilePage(page)
which is a great opportunity for a mixin if your language supports them, so you can keep your page objects clean -- and mix in your sanity checks.
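For instance, a rough sketch of the mixin idea in Ruby/RSpec; ProfilePage, its has_header? predicate and the open constructor are assumed here (borrowed loosely from the pseudocode above), not part of any real library.

module ProfilePageChecks
  # Reusable sanity checks, mixed into specs instead of being buried in the page object.
  def assert_looks_like_profile_page(page)
    expect(page).to be_a(ProfilePage)
    expect(page.has_header?).to be(true)   # hypothetical predicate on the page object
  end
end

RSpec.describe "profile page" do
  include ProfilePageChecks

  it "opens from the account menu" do
    page = ProfilePage.open                # assumed constructor, as in the pseudocode above
    assert_looks_like_profile_page(page)
  end
end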

This perplexes me when I see that the same assertion could be used across multiple test methods. For example, writing an assertion-specific method:
public PaymentPage verifyOrderAmount(BigDecimal orderAmount) {
    Assertion.assertEquals(lblOrderAmount.getText(), orderAmount, "Order Amount is wrong on Payment details page");
    return this;
}
Now I can reuse it in every test that needs it, instead of repeating the same assertion statement in multiple tests dealing with multiple scenarios. Needless to say, I can chain multiple assertions in a method depending on the test, e.g.:
.verifyOrderAmount(itemPrice)
.verifyBankAmount(discountedItemPrice)
.verifyCouponCode(flatDiscountCode.getCouponCode())
When a page object is supposed to represent the services offered by the page, isn't an assertion point also a service provided by the page?
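For what it's worth, in Ruby the same chainable idea might look like the sketch below; the class and locator are invented, and a plain raise stands in for the framework assertion so the page object stays framework-agnostic.

class PaymentPage
  def initialize(driver)
    @driver = driver
  end

  def verify_order_amount(expected)
    actual = @driver.find_element(css: "#order-amount").text   # hypothetical locator
    raise "Order amount is wrong on the payment page: expected #{expected}, got #{actual}" unless actual == expected.to_s
    self   # return self so verifications can be chained
  end
end

Each verification returning self is what makes the .verifyOrderAmount(...).verifyBankAmount(...) chaining shown above possible.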

@Matt, reusing domain models in page objects might save you time, but isn't that a test smell? Test logic should be kept well clear of the domain model (depending on what you are trying to achieve).
Back to the original question: if you really must do assertions in the page object, why not use Selenium's LoadableComponent, where you can use the isLoaded() method or include your custom assertion in your LoadableComponent subclass? This will keep your page object free of assertions, but you still get to do assertions in the loadable component. See the link below...
https://github.com/SeleniumHQ/selenium/wiki/LoadableComponent
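LoadableComponent ships with the Java bindings; for a Ruby code base, a hand-rolled sketch of the same idea might look like the following (class names, URL and locator are invented for illustration).

class LoadablePage
  # Mirrors LoadableComponent#get: try to load, then verify the load succeeded.
  def get
    load_page unless loaded?
    raise "#{self.class} did not load" unless loaded?
    self
  end
end

class ProfilePage < LoadablePage
  def initialize(driver)
    @driver = driver
  end

  private

  def load_page
    @driver.navigate.to "https://example.test/profile"    # placeholder URL
  end

  def loaded?
    @driver.find_elements(css: "#profile-header").any?    # cheap "is it there?" check
  end
end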
_The Dreamer

I couldn't agree more with the author.
Adding assertions in test methods helps you to 'fail early'. By assertions, I mean checking if a certain page is loaded after clicking a button, etc. (the so-called general assertions).
I really don't believe it increases the execution time that much. UI automation is slow by default; adding a few millisecond-level checks does not make much of a difference, but it can ease your troubleshooting, report a failure earlier and make your code more reusable.
However, it also depends on the type of UI tests. For instance, if you are implementing end-to-end tests with mostly positive paths, it makes sense to check inside the test method that clicking a button actually results in opening a page. However, if you are writing a bunch of negative scenarios, that's not always the case.

Related

PHPUnit - Creating tests after development

I've watched and read a handful of tutorials on PHPUnit and Test Driven Development and have recently begun working with Laravel, which extends the PHPUnit framework with its TestCase class. All of these things make sense to me as far as creating tests as you develop, and I find Laravel's extensions particularly intuitive (especially in regards to testing controller routes).
However, I've recently been tasked with creating unit tests for a sizable app that's near completion. The app is built in CodeIgniter, and it was not built with any tests.
I find that I'm not entirely sure where to begin, or what steps to take in order to determine the tests I should create.
Should I be looking to test each controller method? Or do I need to break it down more than that? Admittedly, many of these controller methods are doing more than one task.
It is really difficult to write tests for an existing project. I suggest you first start by writing tests for classes which do not depend on other classes. Then you can continue writing tests for classes that are coupled with the classes you have already covered. You will increase your test coverage step by step by repeating this process.
Also, don't forget that sometimes you will need to refactor your code to make it testable. You should improve the design of the code; for example, if your controller methods do more than one task, you should split each method into sub-methods and test each of those independently.
I also suggest you take a look at this question.
You are in a bit of a tight spot, but here is what I would do in your situation. You need to refactor (ie. change) the existing code so that you end up with three types of functions.
The first type are those that deal with the outside world. By this I mean anything that talks to I/O, or your framework or your operating system or even libraries or code from stable modules. Basically everything that has a dependency on code that you can not, or may not change.
The second group of functions are where you transform or create data structures. The only thing they should know about are the data structures that they receive as parameters and the only way they communicate back is by changing those structures or by creating and populating a new structure.
The third group consists of co-ordinating functions which make the calls to the outside world functions, get their returned data structures and pass those structures to the transforming functions.
Your testing strategy is then as follows: the second group can be tested by creating fake data structures, passing them in and checking that the transforms were done correctly. The third group of co-ordinating functions can be tested by dependency injection and mocking to see that they call the outside-world and transform functions correctly. Finally, the last group of functions should not be tested. You follow the maxim "make it so simple that there is obviously nothing wrong". See if you can keep it to a single line of code. If you go over four lines of code for these then you are probably doing it wrong.
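The question is about PHP, but to stay consistent with the rest of this page, here is a small Ruby sketch of that split (all names invented): a pure transform tested with plain data, and a coordinator whose outside-world dependency is injected so tests can replace it with a fake.

class OrderTotals
  # Second group: pure transform, knows only about the structures it receives.
  def self.summarize(rows)
    { count: rows.size, total: rows.sum { |r| r[:amount] } }
  end
end

class OrderReport
  # Third group: coordinator; the gateway (first group, talks to the outside
  # world) is injected so tests can replace it with a fake.
  def initialize(gateway)
    @gateway = gateway
  end

  def build
    OrderTotals.summarize(@gateway.fetch_rows)
  end
end

RSpec.describe OrderReport do
  it "summarizes whatever rows the gateway returns" do
    fake_gateway = Class.new { def fetch_rows; [{ amount: 5 }, { amount: 7 }]; end }.new
    expect(OrderReport.new(fake_gateway).build).to eq(count: 2, total: 12)
  end
end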
If you are completely new to TDD I do however strongly suggest that you first get used to doing it on green field projects/modules. I made a couple of false starts on unit testing because I tried to bolt it onto projects afterwards. TDD is really a joy when you finally grok it so it would not be good if you get discouraged early on because of a too steep learning curve.

How to document undefined behaviour in the Scrum/agile/TDD process [closed]

We're using a semi-agile process at the moment where we still write a design/specification document and update it during the development process.
When we're refining our requirements we often hit edge cases where we decide it's not important to handle it, so we don't write any code for that use case and we don't test it. In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way.
In a more fully-fledged agile process, the tests are supposed to act as a specification for the expected behaviour of the system, but how would you record the fact that a certain scenario is explicitly out-of-scope rather than just getting accidentally missed out?
As a bit of clarification, here's the situation I'm trying to avoid: We have discussed a scenario and decided we won't handle it because it doesn't make sense. Then later on, when someone is trying to write the user guide, or give a training session, or a customer calls the help desk, exactly the same scenario comes up, so they ask me how the system handles it, and I think "I remember talking about this a year ago, but there are no tests for it. Maybe it got missed off the plan, or maybe we decided it wasn't a sensible use-case, or maybe there's a subtle reason why you can't actually ever get into that situation", so I have to try and search old skype chats or emails to find out the answer. What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future. At the moment I put this in the spec where everyone can see it.
I would document deliberately unsupported use cases/stories/requirements/features in your test files, which are much more likely to be regularly consulted, updated, etc. than specifications would be. I'd document each unsupported feature in the highest-level test file in which it was appropriate to discuss that feature. If it was an entire use case, I'd document it in an acceptance test (e.g. a Cucumber feature file or RSpec feature spec); if it was a detail I might document it in a unit test.
By "document" I mean that I'd write a test if I could. If not, I'd just comment. Which one would depend on the feature:
For features that a user might expect to be there, but for which there is no way for the user to access (e.g. a link or menu item that simply isn't present), I'd write a comment in the appropriate acceptance test file, next to the tests of the related features that do exist.
Side note: Some testing tools (e.g. Cucumber and RSpec) also allow you to have scenarios or examples in feature or spec files which aren't actually run, so you can use them like comments. I'd only do that if those disabled scenarios/examples didn't result in messages when you ran the tests that might make someone think that something was broken or unfinished. For example, RSpec's pending/skip loudly announces that there is work left to be done, so it would probably be annoying to use it for cases that were never meant to be implemented.
For situations that you decided not to handle, but which an inquisitive user might get themselves into anyway (e.g. entering an invalid value into a field or editing a URL to access a page for which they don't have permission), don't just ignore them, handle them in a clean if minimal way: quietly clear the invalid value, redirect the user to the home page, etc. Document this behavior in tests, perhaps with a comment explaining why you aren't doing anything even more helpful. It's not a lot of extra work, and it's a lot better than showing the user an error page or other alarming behavior.
For situations like the previous, but that you for some reason decided not to or couldn't find a way to handle at all, you can still write a test that documents the situation, for example that entering some invalid value into a form results in an HTTP 500.
If you would like to write a test, but for some reason you just can't, there are always comments -- again, in the appropriate test file near tests of related things that are implemented.
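As a concrete (and entirely invented) illustration of the suggestions above, a spec can record the decision as a comment right next to the test for the minimal handling; SearchQuery, its parameter and the spec reference are hypothetical.

RSpec.describe SearchQuery do
  # OUT OF SCOPE (see design spec): searching closed questions is
  # intentionally unsupported; there is no UI entry point for it.

  it "quietly ignores an unknown status filter instead of raising" do
    query = SearchQuery.new(status: "closed")   # hypothetical class and parameter
    expect(query.filters).to eq({})             # minimal, clean handling
  end
end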
You should never test undefined behavior, by ...definition. The moment you test a behavior, you are defining it.
In practice, either it's valuable to handle an edge case or it isn't. If it is, then there should be a user story for it, which acts as documentation for that edge case. What you don't want to have is an old user story documenting a future behavior, so it's probably not advisable to document undefined behavior in stories that don't handle it.
More in general, agile development always works iteratively. Edge case discovery is part of iterative work: with work comes increased knowledge, with increased knowledge comes more work. It is important to capture these discoveries in new stories, instead of trying to handle everything in one go.
For example, suppose we're developing Stack Overflow and we're doing this story:
As a user I want to search questions so that I can find them
The team develops a simple question search and discovers that we need to handle closed questions... we hadn't thought of that! So we simply don't handle them (whatever the simplest to implement behavior is). Notice that the story doesn't document anything about closed questions in the results. We then add a new story
As a user I want to specifically search closed questions so that I can find more results
We develop this story, and find more edge cases, which are then more stories, etc.
In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way
Having undocumented functionality in your product really is a bad practice.
If your development team followed BDD/TDD techniques they should (note emphasis) reduce the likelihood of this happening. If you found this edge-case then what makes you think your customer won't? Having an untested and unexpected feature in your product could compromise the stability of your product.
I'd suggest that if an undocumented feature is found:
Find out how it was introduced (common reason: a developer thought it might be a good feature to have as it might be useful in the future and they didn't want to throw away work they produced!)
Discuss the feature with your Business Analysts and Product Owner. Find out if they want such a feature in your product. If they do, great: document and test it. If they don't, remove it as it could be a liability.
You also had a question regarding the tracking of the outcome of these edge-case scenarios:
What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future.
As you are writing a design/specification document, one approach you could take is to version that document. Then, when a feature/scenario is taken out you can note within a version change section in your document why the change was made. You can then refer to this change history at a later date.
However I'd recommend using a planning board to keep track of your user stories. Using such a board you could write a note on the card (virtual/physical) explaining why the feature was dropped which also could be referred to at a later date.

Test Driven Development initial implementation

A common practice of TDD is that you make tiny steps. But one thing which is bugging me is something I've seen a few people do, whereby they just hardcode values/options, and then refactor later to make it work properly. For example…
describe Calculator do
  it "multiplies" do
    expect(Calculator.multiply(4, 2)).to eq(8)
  end
end
Then you do the least possible to make it pass:
class Calculator
  def self.multiply(a, b)
    return 8
  end
end
And it does!
Why do people do this? Is it to ensure they're actually implementing the method in the right class or something? Because it just seems like a sure-fire way to introduce bugs and give false confidence if you forget something. Is it a good practice?
This practice is known as "Fake it 'til you make it." In other words, put fake implementations in until such time as it becomes simpler to put in a real implementation. You ask why we do this.
I do this for a number of reasons. One is simply to ensure that my test is being run. It's possible to be configured wrong so that when I hit my magic "run tests" key I'm actually not running the tests I think I'm running. If I press the button and it's red, then put in the fake implementation and it's green, I know I'm really running my tests.
Another reason for this practice is to keep a quick red/green/refactor rhythm going. That is the heartbeat that drives TDD, and it's important that it have a quick cycle. Important so you feel the progress, important so you know where you're at. Some problems (not this one, obviously) can't be solved in a quick heartbeat, but we must advance on them in a heartbeat. Fake it 'til you make it is a way to ensure that timely progress. See also flow.
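Continuing the Calculator example from the question, here is a sketch of how the next test forces the fake to become real (sometimes called triangulation); it assumes the same RSpec setup as above.

describe Calculator do
  it "multiplies" do
    expect(Calculator.multiply(4, 2)).to eq(8)
    expect(Calculator.multiply(3, 3)).to eq(9)   # the hard-coded 8 now fails
  end
end

class Calculator
  def self.multiply(a, b)
    a * b   # the simplest implementation that satisfies both examples
  end
end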
There is a school of thought, which can be useful in training programmers to use TDD, that says you should not have any lines of source code that were not originally part of a unit test. By first coding the algorithm that passes the test into the test, you verify that your core logic works. Then, you refactor it out into something your production code can use, and write integration tests to define the interaction and thus the object structure containing this logic.
Also, religious TDD adherence would tell you that there should be no logic coded that a requirement, verified by an assertion in a unit test, does not specifically state. Case in point; at this time, the only test for multiplication in the system is asserting that the answer must be 8. So, at this time, the answer is ALWAYS 8, because the requirements tell you nothing different.
This seems very strict, and in the context of a simple case like this, nonsensical; to verify correct functionality in the general case, you would need an infinite number of unit tests, when you as an intelligent human being "know" how multiplication is supposed to work and could easily set up a test that generated and tested a multiplication table up to some limit that would make you confident it would work in all necessary cases. However, in more complex scenarios with more involved algorithms, this becomes a useful study in the benefits of YAGNI. If the requirement states that you need to be able to save record A to the DB, and the ability to save record B is omitted, then you must conclude "you ain't gonna need" the ability to save record B, until a requirement comes in that states this. If you implement the ability to save record B before you know you need to, then if it turns out you never need to then you have wasted time and effort building that into the system; you have code with no business purpose, that regardless can still "break" your system and thus requires maintenance.
Even in the simpler cases, you may end up coding more than you need if you code beyond requirements that you "know" are too light or specific. Let's say you were implementing some sort of parser for string codes. The requirements state that the string code "AA" = 1, and "AB" = 2, and that's the limit of the requirements. But, you know the full library of codes in this system includes 20 others, so you include logic and tests that parse the full library. You go back to the client, expecting your payment for time and materials, and the client says "we didn't ask for that; we only ever use the two codes we specified in the tests, so we're not paying you for the extra work". And they would be exactly right; you've technically tried to bilk them by charging for code they didn't ask for and don't need.

Test Driven Development - What exactly is the test?

I've been learning what TDD is, and one question that comes to mind is what exactly is the "test". For example, do you call the webservice and then build the code to make it work? or is it more unit testing oriented?
In general the test may be...
unit test which tests an individual subcomponent of your software without any external dependencies to other classes
integration test which tests the connection between two separate systems, i.e. their integration capability
acceptance test for validating the functionality of the system
...and some others I've most likely temporarily forgotten for now.
In TDD, however, you're mostly focusing on the unit tests when creating your software.
It's entirely Unit Test driven.
The basic idea is to write the unit tests first, and then do the absolute minimum amount of work necessary to pass the tests.
Then write more tests to cover more of the requirements, and implement a bit more to make it pass.
It's an iterative process, with cycles of test writing, and then code writing.
Here are a couple of good articles by Unclebob
Three rules of TDD
TDD with Acceptance and Unit tests
I suggest you not put too much emphasis on the "test", because TDD is actually a software development methodology, not a testing methodology.
I would say it is about unit testing and code coverage. It is about shipping bugless code and being able to make changes easily in the future.
See Uncle Bob's words of wisdom.
The way I use it, it's unit-testing oriented. Suppose I want a method that squares ints. I write this method:
int square(int x) { throw new NotImplementedException(); } // stub: fails every test until implemented
and then write some tests like :
[Test]
public void TestSquare()
{
    Assert.AreEqual(0, square(0));
    Assert.AreEqual(1, square(1));
    Assert.AreEqual(100, square(10));
    Assert.AreEqual(1, square(-1));
    Assert.AreEqual(100, square(-10));
    // ...
}
Ok, maybe square is a bad example :-)
In each case I test the expected behaviour and all borderline values like maxint, zero and null (remember you can test for errors too), make sure the test fails (which isn't hard :-)), then I keep working on the function until it works.
So: first a unit test that fails and covers what you want it to cover, then the method.
Generally, unit tests in "TDD" shouldn't involve any IO at all.
In fact, you'll be a lot more effective if you write objects that do not create side effects (I/O is almost always, if not always, a side effect!), and define the behavior of your class either in terms of return values of methods, or calls made to interfaces that have been passed into the object.
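A tiny sketch of that idea, with invented names: the side-effecting collaborator is passed in, so the unit test never touches real I/O.

class GreetingService
  def initialize(clock)
    @clock = clock   # anything that responds to #hour
  end

  def greeting
    @clock.hour < 12 ? "Good morning" : "Good afternoon"
  end
end

RSpec.describe GreetingService do
  it "greets according to the injected clock, with no real I/O" do
    fake_clock = Struct.new(:hour).new(9)
    expect(GreetingService.new(fake_clock).greeting).to eq("Good morning")
  end
end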
I just want to give my view on the topic, which may help you understand TDD in a slightly different way.
TDD is a design method that relies on testing first. Since you asked what the test is, I'll put it like this:
If you want to build an application, you know the purpose of what you want to build, and you know that when you are done (or along the way) you need to test it, e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click to execute a piece of code and pop up a dialog showing the result of the operation, etc.
On the other hand, TDD changes your mindset, and I'll point out one such way. Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile, and somewhere in your head you hold the requirement, coding and testing via buttons, pop-ups or code inspection. I call this style SDDD (syntax-debugging-driven development).
When you are doing TDD, however, it is "semantic-debugging-driven development", because you write down the thoughts/goals of your application first as tests (a more dynamic and repeatable version of a whiteboard), which exercise the logic (or "semantics") of your application and fail whenever you have a semantic error, even if your application has no syntax errors and compiles.
By the way, even though I said "you know the purpose of what you want to build", in practice you may not know or have all the information required to build the application. Since TDD forces you to write tests first, you are compelled to ask more questions about how the application should work at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

TDD ...how?

I'm about to start out on my first TDD (test-driven development) program, and I (naturally) have a TDD mental block... so I was wondering if someone could help guide me a bit on where I should start.
I'm creating a function that will read binary data from a socket and parse its data into a class object.
As far as I see, there are 3 parts:
1) Logic to parse data
2) socket class
3) class object
What are the steps that I should take so that I could incrementally TDD? I definitely plan to first write the test before even implementing the function.
The issue in TDD is "design for testability"
First, you must have an interface against which to write tests.
To get there, you must have a rough idea of what your testable units are.
Some class, which is built by a function.
Some function, which reads from a socket and emits a class.
Second, given this rough interface, you formalize it into actual non-working class and function definitions.
Third, you start to write your tests -- knowing they'll compile but fail.
Part-way through this, you may start head-scratching about your function. How do you set up a socket for your function? That's a pain in the neck.
However, the interface you roughed out above isn't the law, it's just a good idea. What if your function took an array of bytes and created a class object? This is much, much easier to test.
So, revisit the steps, change the interface, write the non-working class and function, now write the tests.
Now you can fill in the class and the function until all your tests pass.
When you're done with this bit of testing, all you have to do is hook in a real socket. Do you trust the socket libraries? (Hint: you should) Not much to test here. If you don't trust the socket libraries, now you've got to provide a source for the data that you can run in a controlled fashion. That's a large pain.
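Here is a sketch of the bytes-in, object-out interface suggested above (the wire format and all names are invented), which needs no socket at all to test:

Packet = Struct.new(:version, :payload)

class PacketParser
  def parse(bytes)
    # illustrative format: first byte is a version, the rest is the payload
    Packet.new(bytes[0], bytes[1..-1].pack("C*"))
  end
end

RSpec.describe PacketParser do
  it "builds a Packet from a plain byte array" do
    packet = PacketParser.new.parse([1, 104, 105])
    expect(packet.version).to eq(1)
    expect(packet.payload).to eq("hi")
  end
end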
Your split sounds reasonable. I would consider the two dependencies to be the input and output. Can you make them less dependent on concrete production code? For instance, can you make it read from a general stream of data instead of a socket? That would make it easier to pass in test data.
The creation of the return value could be harder to mock out, and may not be a problem anyway - is the logic used for the actual population of the resulting object reasonably straightforward (after the parsing)? For instance, is it basically just setting trivial properties? If so, I wouldn't bother trying to introduce a factory etc there - just feed in some test data and check the results.
First, start thinking "the testS", plural, rather than "the test", singular. You should expect to write more than one.
Second, if you have mental block, consider starting with a simpler challenge. Lower the difficulty until it's real easy to do, then move on to more substantial work.
For instance, assume you already have a byte array with the binary data, so you don't even need to think about sockets. All you need to write is something that takes in a byte[] and returns an instance of your object. Can you write a test for that?
If you still have a mental block, lower it yet another notch. Assume your byte array is only going to contain default values anyway. So you don't even have to worry about the parsing, just about being able to return an instance of your object that has all values set to the defaults. Can you write a test for that?
I imagine something like:
public void testFooReaderCanParseDefaultFoo() {
    FooReader fr = new FooReader();
    Foo myFoo = fr.buildFoo();
    assertEquals(0, myFoo.bar());
}
That's rock bottom, right? You're only testing the Foo constructor. But you can then move up to the next level:
public void testFooReaderGivenBytesBuildsFoo() {
    FooReader fr = new FooReader();
    byte[] fooData = {1};
    fr.consumeBytes(fooData);
    Foo myFoo = fr.buildFoo();
    assertEquals(1, myFoo.bar());
}
And so on...
'The best testing framework is the application itself'
I believe a common misconception amongst developers is that they make a strong association between testing frameworks and TDD principles. I would advise re-reading the official docs on TDD, bearing in mind that there is no real relationship between testing frameworks and TDD. After all, TDD is a paradigm, not a framework.
Upon reading the wiki on TDD (https://en.wikipedia.org/wiki/Test-driven_development), I've come to realise that to an extent things are a little bit open to interpretation.
There are various personal styles of TDD mainly due to the fact that TDD principles are open to interpretation.
I'm not here to say anyone is wrong, but I would like to share my techniques with you and explain how they have served me well. Bear in mind that I have been programming for 36 years; making my programming habits very well evolved.
Code reuse is overrated. Reuse code too much and you'll end up with bad abstractions, and it will become very difficult to fix or change something without it affecting something else. The obvious advantage is less code to manage.
Repeating too much code leads to code management problems and oversized code bases. However it does have the advantage of good separation of concerns (the ability to tweak, change and fix things without affecting other parts of the app).
Don't repeat/refactor too much, don't reuse too much. Code needs to be maintainable. It’s important to understand and respect the balance between code reuse and abstraction/separation of concerns.
When deciding whether to reuse code I base the decision on: .... Will the nature of this code change in context throughout the app codebase? If the answer is no, then I reuse it. If the answer is yes or I'm not sure, I repeat/refactor it. I will however revise my codebases from time to time and see if any of my repeated code can be merged without compromising separation of concerns/abstraction.
As far as my basic programming habits are concerned, I like to write the conditions (if then else switch case etc) first; test them, then fill the conditions with the code and test again. Keep in mind there's no rule that you have to do this in a unit test. I refer to this as the low level stuff.
Once my low level stuff is done, I'll either reuse the code or refactor it into another part of the app, but not after testing it very thoroughly. Problem with repeating/refactoring badly tested code is that, if it’s broken, you have to fix it in multiple places.
BDD to me is a natural follow-on from TDD. Once my code base is well tested I can easily tweak behaviours by moving entire blocks of code around. The cool thing about my programming habits is that sometimes I move code around and discover useful behaviours that I didn't even intend. It can sometimes even be useful for rebranding stuff to seem like a completely different code base.
To this end my code bases tend to start out a bit slow and pick up momentum because as I advance toward the end of development I have more and more code to refactor from or reuse.
The advantages for me in the way that I code is that, I am able to take on very high levels of complexity as this is promoted by good separation of concerns. It’s also awesome for writing highly optimised code. However the well optimised code tends to be a bit bloated, but to my knowledge there is no way to write optimized code without a bit of bloating. If the app doesn't need high processor efficiency, there's nothing stopping me from de-bloating my code. I'm of the opinion that server side code should be optimised and most client side code normally doesn't require it.
Going back to the topic of testing frameworks, I use them to just save a bit of compiler time.
As far as following story boards is concerned, that comes naturally to me without actually considering it. I've noticed most devs develop in the natural order of story boards even when they are not available.
As a general separation-of-concerns strategy, in most apps I separate concerns based on UI forms. For example I'll reuse code within a form and repeat/refactor across forms. This is only a general rule; there are times when I have to think outside the box. Sometimes repeating code can serve well for making code processor-efficient.
As a little addendum to my TDD habits; I do optimizations and fault tolerance last. I will try to avoid using try catch blocks as much as possible and write my code in such a way as to not need them. For example rather than catch a null, I will check for null, or rather than catch an index out of bounds, I will scrutinise my code so that it never happens. I find that error trapping too early in app development, leads to semantic errors (behavioural errors that don't crash the app). Semantic errors can be very hard to trace or even notice.
Well that’s my 10 cents. Hope it helps.
Test-driven development?
So, this means you should start by writing a test first.
Write a test which contains the code for 'how you want to use your class'. The class or method that you are going to exercise with this test does not even exist yet.
For instance, you could write a test first like this:
[Test]
public void CanReadDataFromSocket()
{
    SocketReader r = new SocketReader( ... ); // create a SocketReader instance which takes a socket or a mock socket in its constructor
    byte[] data = r.Read();
    Assert.IsTrue(data.Length > 0);
}
For instance; I'm just making up an example here.
Next, once you're able to read data from a socket, you can start thinking about how you'll parse it, and write a test in which you use the 'Parser' class, which takes the data that you've read and outputs an instance of your data class.
etc...
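Since the original question is tagged ruby, roughly the same first test in RSpec might look like this, with a fake socket injected instead of a real one (all names are assumptions, not an existing API):

class SocketReader
  def initialize(socket)
    @socket = socket   # a real TCPSocket in production, a fake in tests
  end

  def read
    @socket.recv(1024)
  end
end

RSpec.describe SocketReader do
  it "can read data from the socket" do
    fake_socket = Class.new { def recv(_bytes); "\x01\x02\x03"; end }.new
    data = SocketReader.new(fake_socket).read
    expect(data.length).to be > 0
  end
end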
Knowing where to start writing tests and when to stop writing tests while using TDD, is a common problem when starting out.
I have found that it can sometimes help to write an integration test first. Doing so will help create some of the common objects you will be using. It will also allow you to focus your thoughts and tests, since you will need to start writing tests to make the integration test pass.
When I was starting with TDD, I read these 3 rules by Uncle Bob that really helped me out:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
In a shorter version it would be:
Write only enough of a unit test to fail.
Write only enough production code to make the failing unit test pass.
As you can see, this is very simple.
