I have a question about test driven development, which is something I am trying to learn.
I have been reviewing a project delivered by a team at my company to see what kind of stuff they include in their tests.
Basically, for every project in the solution, there is a corresponding test project. This includes the projects in the data layer. I have found that the tests in that project are actually hitting the database and making assertions based on retrieved data.
In fact, the tests of the classes in the Services layer are also hitting the database.
Is this normal in TDD?
If not, and if those classes being tested did nothing other than retrieve data, then what would be the best way of testing them?
Dare I say, should they be tested at all? If TDD helps drive out the design, arguably they should.
What do the TDD kings out there say?
Is hitting the database from most tests normal in TDD? Yes, unfortunately, it's more normal than it ought to be. That doesn't mean it's right, though.
There are various styles of TDD:
Outside-In TDD, popularized by GOOS (Growing Object-Oriented Software, Guided by Tests).
Bottom-Up TDD, which is the 'older' style, where you develop 'leaf' components first.
When you use the Outside-In approach, you'd typically start with a few coarse-grained tests to flesh out the behaviour of the system. These might very well hit the database. That's OK.
However, you can't properly test for basic correctness from the boundary of a complex application alone, because the combinatorial explosion of required test cases is prohibitive. You would literally have to write tens of thousands, or hundreds of thousands of test cases.
Therefore, even with the Outside-In approach, you should write most of the tests at the level of a unit. These tests should not hit the database.
If you're using the Bottom-Up style, you shouldn't hit the database in most cases.
In short, TDD means Test-Driven Development, not necessarily Unit Test-Driven Development, so it may be fine with a few tests hitting a database. However, such tests tend to be slow and fragile, so there should be only a few of them. The concept of the Test Pyramid nicely explains this.
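As a rough illustration of such a unit-level test (sketched in Python with unittest.mock, and with purely hypothetical class and method names), the service's logic is exercised against a mock repository, so no database is involved:

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Hypothetical service-layer class; the repository is injected rather than created internally."""
    def __init__(self, repository):
        self._repository = repository

    def total_for_customer(self, customer_id):
        orders = self._repository.find_orders(customer_id)
        return sum(order["amount"] for order in orders)

class OrderServiceTests(unittest.TestCase):
    def test_total_sums_order_amounts_without_hitting_the_database(self):
        # The mock stands in for the data-layer class, so the test stays fast and isolated.
        repository = Mock()
        repository.find_orders.return_value = [{"amount": 10}, {"amount": 15}]

        service = OrderService(repository)

        self.assertEqual(25, service.total_for_customer(customer_id=42))
        repository.find_orders.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()
```

Tests of this shape form the broad base of the Test Pyramid; the handful of tests that genuinely need a real database sit higher up and run less often.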
If you want to learn more, you may want to watch my Pluralsight courses
Outside-In Test-Driven Development
Advanced Unit Testing
The chapter about TDD in Robert Martin's "Clean Code" caught my imagination.
However.
These days I am mostly expanding or fixing large existing apps.
TDD, on the other hand, seems to work only when writing from scratch.
Talking about these large existing apps:
1. they were not written with TDD (of course).
2. I cannot rewrite them.
3. writing comprehensive TDD-style tests for them is out of the question in the timeframe.
I have not seen any mention of how to "bootstrap" TDD into a large, monolithic existing app.
The problem is that most classes of these apps, in principle, work only inside the app.
They are not separable. They are not generic. Just to fire them up, you need half of the whole app, at least. Everything is connected to everything.
So, where is the bootstrap?
Or is there an alternative technique, with the same benefits as TDD, that would work for expanding existing apps that were not developed with TDD?
The bootstrap is to isolate the area you're working on and add tests for behavior you want to preserve and behavior you want to add. The hard part of course is making it possible, as untested code tends to be tangled together in ways that make it difficult to isolate an area of code to make it testable.
Buy Working Effectively with Legacy Code, which gives plenty of guidance on how to do exactly what you're aiming for.
You might also want to look at the answers to this related question, Adding unit tests to legacy code.
Start small. Grab a section of code that can reasonably be extracted and made into a testable class, and do it. If the application is riddled with so many hard dependencies and terrifying spaghetti logic that you can't possibly refactor without fear of breaking something, start by making a bunch of integration tests, just so you can confirm proper behavior before/after you start messing around with it.
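As a minimal sketch of that idea (Python, with an invented stand-in for the legacy code), what Working Effectively with Legacy Code calls a characterization test simply pins down what the code does today, so any refactoring that changes the behaviour fails loudly:

```python
import unittest

def calculate_discount(order_total, customer_type):
    """Stand-in for a tangled legacy function we don't dare change yet."""
    if customer_type == "gold":
        return order_total * 0.1
    if order_total > 100:
        return 5.0
    return 0.0

class CalculateDiscountCharacterizationTests(unittest.TestCase):
    """Documents current behaviour (right or wrong), not intended behaviour."""

    def test_gold_customers_get_ten_percent(self):
        self.assertEqual(10.0, calculate_discount(100, "gold"))

    def test_large_orders_get_a_flat_five(self):
        self.assertEqual(5.0, calculate_discount(150, "regular"))

    def test_small_orders_get_nothing(self):
        self.assertEqual(0.0, calculate_discount(20, "regular"))

if __name__ == "__main__":
    unittest.main()
```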
Your existing application sounds as if it suffers from tight coupling and a large amount of technical debt. In cases such as these you can spend a LOT of time trying to write comprehensive unit tests where time may be better spent doing major refactoring, specifically promoting loose coupling.
In other cases investing time and effort into unit testing with Mocking frameworks can be beneficial as it helps decouple the application for purposes of testing, making it possible to test single components. Dependency Injection techniques can be used in conjunction with mocking to help make this happen as well.
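A small sketch of what that decoupling can look like (Python with unittest.mock; the class and collaborator names are invented): the dependency is passed in through the constructor, so production code keeps the real collaborator while a test substitutes a mock:

```python
import unittest
from unittest.mock import Mock

class SmtpMailer:
    """Stand-in for a slow, external dependency (a real SMTP server)."""
    def send(self, to, body):
        raise RuntimeError("would talk to a real SMTP server")

class WelcomeEmailSender:
    def __init__(self, mailer=None):
        # Constructor injection: callers that don't care get the real mailer;
        # tests can hand in a mock instead.
        self._mailer = mailer if mailer is not None else SmtpMailer()

    def welcome(self, user_email):
        self._mailer.send(to=user_email, body="Welcome aboard!")

class WelcomeEmailSenderTests(unittest.TestCase):
    def test_sends_welcome_mail_without_a_real_server(self):
        mailer = Mock()
        WelcomeEmailSender(mailer=mailer).welcome("alice@example.com")
        mailer.send.assert_called_once_with(to="alice@example.com", body="Welcome aboard!")

if __name__ == "__main__":
    unittest.main()
```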
I am a bit familiar with rspec [Ruby] and specs [Scala]. Yesterday I worked through a Cucumber tutorial. What I disliked about Cucumber is that, in addition to describing scenarios (as you would with spec- or xUnit-style testing), you have to implement an extra layer of indirection: translating scenario steps into Ruby expressions. To me, creating unnecessary (?) extra layers of indirection feels like the "heavy-weight" J2EE way, not the "light-weight" Ruby way. Is understandability by "domain experts" the only advantage of Cucumber? Or are there some non-obvious (technical?) advantages for the developer/tester as well?
BDD, from a practical standpoint, is highly synonymous with TDD. RSpec is a BDD test framework, and so is Cucumber.
That stakeholders can read and understand Cucumber acceptance specifications is certainly a key advantage, but this fact alone really doesn't get at the real benefit of Cucumber. Your features and scenarios ought to grow somewhat in specificity as the work being done for them moves through the value stream of your team's development cycle.
Some teams may have an analyst scoping out work at the beginning of the cycle. Sometimes this analyst writes the Gherkin acceptance specifications, but whoever writes the first draft, you would expect the scenarios to be fairly coarse-grained. They may not cover every unhappy path.
As the developers take up the work, they often discover edge cases and missing scenarios. At this point they can touch base with the analyst, and the results of such conversations should be written into the cucumber features.
In my experience the testers have cultivated an even more critical eye, and thus it is not uncommon to see them add even more scenarios and features. The testers may also uncover defects, which should be added to the cukes to protect us from regression.
The point is that, in addition to providing executable documentation for our code, Cucumber also provides a repository for the state of the team's conversations vis a vis the features being developed.
There is certainly extra overhead to all of this. However, it's worth considering how much overhead is already in your team's process, which Cucumber might serve to streamline. I find that Cucumber helps reduce the amount of thrash that happens in communication about features both within and outside of the team room.
I should also mention that Cucumber is intended for full-stack acceptance testing, and therefore should be less fine-grained than your unit tests. Cucumber is not a good substitute for unit and integration tests. I also would never recommend using Cucumber to verify aesthetic aspects of your app's UI. Just use it to validate the actions which a user might take when using your app.
Cucumber is designed to help engage business stakeholders in refining developers' and testers' understanding of the system by collaborating to create scenarios that everyone can understand.
The act of engaging business stakeholders pays off because everyone gets a better understanding, they start sharing the same language and carrying that language into the code (see Domain Driven Design's "Ubiquitous Language"), which can lead to better estimates, appreciation of scope, conversations around options for achieving the same goal, etc. etc.
There are certainly other ways of achieving the goals. For instance, on our C# project we talk through the scenarios, write them on a wiki then implement them using a little custom Domain Specific Language rather like this one. The same thing could be done in Ruby.
BDD is the process of learning the places where we thought we knew what we were doing, but it turned out we were wrong - the discovery of ignorance. With Feature Injection and unit-level examples, this happens at multiple levels of granularity, all the way up to the project vision itself. It tends to pay back for itself, but you don't need a BDD framework to do BDD.
The conversations in BDD are the important bits, not the tool you use to capture them (I helped write JBehave and still believe this is true). Automating regression tests is also important to cut down the manual effort which rises as the codebase grows, and Cucumber, DSLs and other BDD tools give you this as a very nice by-product which also help you trace, and drive out, the shared understanding.
Edit: I should mention that the reuse of steps is also important, but it doesn't make much difference whether you use a BDD framework or a DSL for that. It does make the difference between a DSL and just procedurally mimicking every user interaction.
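To make the "extra layer of indirection" from the question concrete, here is a rough sketch of how a scenario maps to step definitions. It uses Python's behave library as a stand-in for Cucumber's Ruby steps, and the feature and step names are invented; the scenario text is what stakeholders read, and each step definition is the translation into executable code:

```python
# features/withdraw_cash.feature -- the part stakeholders read:
#
#   Scenario: Withdrawing cash reduces the balance
#     Given an account with a balance of 100
#     When I withdraw 30
#     Then the balance should be 70

# features/steps/withdraw_steps.py -- the extra layer of indirection:
from behave import given, when, then

@given("an account with a balance of {balance:d}")
def step_account_with_balance(context, balance):
    context.balance = balance

@when("I withdraw {amount:d}")
def step_withdraw(context, amount):
    context.balance -= amount  # a real suite would drive the application here

@then("the balance should be {expected:d}")
def step_balance_should_be(context, expected):
    assert context.balance == expected
```

Because steps are matched by pattern, "Given an account with a balance of 250" in some other scenario reuses the same definition, which is part of what offsets the cost of the indirection.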
It depends what you want to achieve.
Cucumber adds overhead, can be tricky to get used to, takes time to master.
If you want your domain experts to be able to read your tests, you should definitely give it a try.
On the other hand, if developers are the only people reading your tests, you can probably stick to rspec/unit-test/etc. and write your integration tests in those frameworks. However you might achieve more readable high level documentation by using cucumber.
See, for example, RSpec 2's core features described in Cucumber.
My question is the same as the title. I have not done test-driven development yet; I have done unit testing and I am aware of testing in general, but I hear it is much more advantageous to follow TDD. So should I opt for it every time, or are there some preconditions? I just need to know some basic or important requirements for when I should opt for TDD. Sorry if this question is too trivial.
I would say whenever you are coding for a project. By this I mean any time you are hoping to produce code that will be used by people. This is basically all code, apart from research where you are learning and discovering new techniques.
Even if you think the project is just a small one, things often spiral out of control without care. You will be glad for the tests when you find yourself having to refactor a big sprawl of code.
Also note that TDD isn't just about testing. It is a development methodology that encourages you to create clean and solid designs.
As you are starting out, TDD everything. Once you have more experience, you can perhaps back off and decide when not to TDD.
IMHO, if you are not comfortable with TDD, applying it in projects where you need to interact with or use legacy code is much more complex than applying it in a project started from scratch. In fact, there is a whole book about this.
HTH
My advice would be to use unit testing as often as possible.
Caveat: In my experience TDD works best when working with technologies you already have some experience with. It's often hard to write test assertions if you do not know exactly what the desired result looks like (For example, try writing the test for an ASP.NET MVC action method if you never wrote an action method in your entire life). In those cases you're probably better off writing unit tests after implementing the code.
If I am developing something without a user interface, I always use TDD these days. After all, you have to test the software. You can either do the extra work and do TDD, or you can do the extra work and kludge together a user client just for testing. The former tests more completely and in a more repeatable fashion.
Doing TDD against user interface code, on the other hand, hasn't really delivered much value for me. For various business reasons I'm restricted to Visual Studio out of the box for my work, and "recording" tests with VS is a huge time sink, especially when you have to re-record them if you change the UI. I do TDD for the business logic behind the UI, but not the UI itself.
I've got about 100 unit tests with about 20% coverage. I'm trying to increase the coverage, and since this is a project under active development, I keep adding new tests.
Currently, running my tests after every build is not feasible; they take about two minutes.
The tests include:
Files read from the test folders (data-driven style, to simulate some HTTP stuff)
Actual HTTP requests to a local web server (this is a huge pain to mock, so I won't)
Not all of them are unit tests; there are also quite complicated multithreaded classes which need to be tested, and I test their overall behaviour. That can be considered functional testing, but it needs to be run every time as well.
Most of the functionality requires reading HTTP, doing TCP, etc. I can't change that, because it's the whole idea of the project; if I changed these tests, it would be pointless to test the stuff at all.
Also, I don't think I have the fastest tools for running unit tests. My current setup uses VS TS with Gallio and NUnit as the framework. I think VS TS + Gallio is a bit slower than the alternatives as well.
What would you recommend to fix this problem? I want to run the unit tests after every little change, but currently this problem is interrupting my flow.
Further Clarification Edit:
The code is highly coupled, unfortunately, and changing it would be a huge refactoring process. There's a chicken-and-egg problem: I need unit tests to refactor such a big codebase, but I can't have more unit tests unless I refactor it :)
The highly coupled code doesn't allow me to split the tests into smaller chunks. Also, I don't test private stuff; that's a personal choice, which allows me to develop much faster and still gain a large amount of benefit.
And I can confirm that all the unit tests (with proper isolation) are actually quite fast; I don't have a performance problem with them.
These don't sound like unit tests to me, but more like functional tests. That's fine, automating functional testing is good, but it's pretty common for functional tests to be slow. They're testing the whole system (or large pieces of it).
Unit tests tend to be fast because they're testing one thing in isolation from everything else. If you can't test things in isolation from everything else, you should consider that a warning sign that your code is too tightly coupled.
Can you tell which tests you have which are unit tests (testing 1 thing only) vs. functional tests (testing 2 or more things at the same time)? Which ones are fast and which ones are slow?
You could split your tests into two groups, one for short tests and one for long-running tests, and run the long-running tests less frequently while running the short tests after every change. Other than that, mocking the responses from the webserver and other requests your application makes would lead to a shorter test-run.
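As a sketch of that split, using Python's pytest markers as an analogue of NUnit categories (the marker name and tests are invented): tag the slow tests and exclude them from the quick after-every-change run:

```python
# test_examples.py
import pytest

def test_discount_calculation():
    # Pure in-memory logic: runs in microseconds, stays in the default run.
    assert round(100 * 0.10, 2) == 10.0

@pytest.mark.slow
def test_full_checkout_against_local_webserver():
    # Hits the real local web server; only run in the full suite.
    ...

# Register the marker once in pytest.ini:
#   [pytest]
#   markers =
#       slow: tests that hit the network, filesystem or database
#
# Quick loop after every change:  pytest -m "not slow"
# Full run (CI, before commit):   pytest
```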
I would recommend a combined approach to your problem:
Frequently run a subset of the tests that are close to the code you make changes to (for example tests from the same package, module or similar). Less frequently run tests that are farther removed from the code you are currently working on.
Split your suite in at least two: fast running and slow running tests. Run the fast running tests more often.
Consider having some of the less likely to fail tests only be executed by an automated continuous integration server.
Learn techniques to improve the performance of your tests, most importantly by replacing access to slow system resources with faster fakes. For example, use in-memory streams instead of files, and stub/mock the HTTP access (see the sketch after this answer).
Learn how to use low risk dependency breaking techniques, like those listed in the (very highly recommended) book "Working Effectively With Legacy Code". These allow you to effectively make your code more testable without applying high risk refactorings (often by temporarily making the actual design worse, like breaking encapsulation, until you can refactor to a better design with the safety net of tests).
One of the most important things I learned from the book mentioned above: there is no magic, working with legacy code is pain, and always will be pain. All you can do is accept that fact, and do your best to slowly work your way out of the mess.
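For the "faster fakes" point above, a minimal sketch (Python; the log-parsing function is invented): code written against any file-like object can be tested with an in-memory stream instead of a real file on disk:

```python
import io
import unittest

def count_error_lines(stream):
    """Works on any file-like object, so tests don't need a real file."""
    return sum(1 for line in stream if line.startswith("ERROR"))

class CountErrorLinesTests(unittest.TestCase):
    def test_counts_errors_from_an_in_memory_stream(self):
        fake_log = io.StringIO("INFO started\nERROR boom\nERROR again\n")
        self.assertEqual(2, count_error_lines(fake_log))

if __name__ == "__main__":
    unittest.main()
```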
First, those are not unit tests.
There isn't much of a point running functional tests like that after every small change. After a sizable change you will want to run your functional tests.
Second, don't be afraid to mock the HTTP part of the application. If you really want to unit test the application, it's a MUST. If you're not willing to do that, you are going to waste a lot more time trying to test your actual logic, waiting for HTTP requests to come back and trying to set up the data.
I would keep your integration level tests, but strive to create real unit tests. This will solve your speed problems. Real unit tests do not have DB interaction, or HTTP interaction.
I always use a category for "LongTest". Those tests are executed every night and not during the day. This way you can cut your waiting time by a lot. Try it: categorize your unit tests.
It sounds like you may need to manage expectations amongst the development team as well.
I assume that people are doing several builds per day and are expected to run tests after each build. You might be well served to switch your testing schedule to run a build with tests during lunch and then another overnight.
I agree with Brad that these sound like functional tests. If you can pull the code apart, that would be great, but until then I'd switch to less frequent testing.
Is the real benefit in TDD the actual testing of the application, or the benefits that writing a testable application brings to the table? I ask because I feel too often the conversation revolves so much around testing, and not the total benefits package.
TDD helps you design your software. The tests become the design. By writing the test first, you think about your code from a consumer's perspective, which makes for a more user-friendly and more compact software design.
Also, by applying TDD you typically end up writing your code in a way where you can supply test mocks and stubs. This leads to less coupled software, making it easier to change and maintain over time.
So I guess a lot of the talk around TDD is about testing, but by doing that, other big benefits follow, such as quality (coverage), flexibility (decoupling), and better design (thinking as the consumer of the API).
The real improvement is that it is a good way to force you to really think through the design and implementation. Then, once you've prepared the tests and written the code, solutions to unforeseen problems appear more easily.
Something that usually happens to me is a good analogy: when I'm going to post a question to a forum or IRC channel, I like to have the problem well written and fully described; many times the process of preparing a well-written and complete description of the problem magically makes the solution appear.
The real benefit of TDD is supposed to be that it allows you to modify/refactor/enhance your application without worrying about whether you've broken existing functionality. The fact that writing unit tests tends to result in loosely coupled code and better architecture isn't necessarily the point of TDD, but I think it's hard to have one without the other.
You can't really experience the benefit of TDD unless you have unit tests with good coverage. In order to do that, you're going to have to write testable code. That's why the two are often used in conjunction or in place of one another.
Automated testing is such a time saver and confidence booster when you are developing a product that you'll ship multiple versions of. With automated tests, you know that you haven't broken anything between versions. This is especially helpful when your product is something that people can write add-ons for - you don't want to break their add-ons between versions.
With TDD, you get a good suite of tests as you develop. Without TDD writing those tests is much more difficult.
Michael Feathers has an insightful blog post about this titled The Flawed Theory Behind Unit Testing. Seriously, go read it. The punch line is
All of these techniques have been shown to increase quality. And, if we look closely we can see why: all of them force us to reflect on our code.
but you should read the full post for the context.
Automated testing keeps humans from doing a machine's job.
Test-driven development maximizes the amount of automated testing.
Beyond a certain point, of course, a human is still required. You reach diminishing returns when you try to apply TDD beyond that point.