Does BDD replace TDD, or are both used together? I've been reading that BDD should only test for things that the user can see. If that is the case, does that mean one would need to use TDD for service methods that the user wouldn't see?
BDD is a replacement for both TDD and ATDD (and derived from them).
The first ever tool for BDD, JBehave, actually started as a replacement for the unit-testing framework JUnit. The intention was that it would allow you to describe the behaviour of your (not-yet-written) system and provide examples, without using the word "test", since really at that stage it's analysis, not testing. That word, "test", was causing all kinds of confusion!
The difference is that example, behaviour, and specification are all problem-space language, whereas the word test is solution-space. It tends to make people think they understand the problem and are testing a solution, which is a problem if they really don't!
It turns out that avoiding the word test helps people to explore the problem more fully. You can read about this in Dan North's original article.
While Dan was exploring BDD in 2003/4, he explained it to Chris Matts, who was working at the time as a Business Analyst. Chris said, "But that's just like analysis!" And so they started exploring how Chris did analysis on whole systems, and how examples of that worked, and Chris realised that it wasn't just about behaviour and outcomes; it was about behaviour and outcomes in a context. That formed the "Given, When, Then" syntax that we know today (it wasn't called Gherkin back then because Cucumber didn't exist).
So Dan started writing a scenario-running framework on top of the unit-"testing" tool, and then ported it to Ruby as RBehave, which then got turned into plain text and became Cucumber. At that point, BDD became a replacement for ATDD (which was normally done back then with incredibly procedural scripts using unit-testing tools).
It doesn't really matter whether you're using JUnit, NUnit, or any other kind of unit-testing framework at a low-level. If you're thinking in terms of examples of behaviour, and talking through those examples, then you're doing BDD. Writing the examples down is useful, and automating them is useful too, and that's why there are BDD tools like JBehave and Cucumber which work at a system level.
We don't really need specific BDD tools at a lower level, because it's pretty easy for developers and testers to map "test" to "example" or "should" without worrying about it.
The difference is in the ease with which people can have conversations, without using the word "test".
When you're writing BDD for a class, the "user" of that class is usually another class. So, in the same way you write scenarios based on what the user of a system can see, you write lower-level examples based on what another class can see. The tests that come out of BDD are a very nice by-product, but the conversation is the most important aspect, even if that conversation has to happen with a rubber duck.
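At the unit level, that can be as simple as naming and structuring the example around behaviour rather than testing. A minimal JUnit sketch (the BasketPricer class and its discount rule are invented for illustration):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test; its "user" is whatever checkout
    // code calls it, so the example is written from that caller's view.
    class BasketPricer {
        private final double threshold, discount;
        BasketPricer(double threshold, double discount) {
            this.threshold = threshold;
            this.discount = discount;
        }
        double priceFor(double basketTotal) {
            return basketTotal > threshold ? basketTotal * (1 - discount) : basketTotal;
        }
    }

    public class BasketPricerBehaviour {
        // "should" naming keeps the example in problem-space language,
        // with no mention of the word "test" in the name itself.
        @Test
        public void shouldApplyDiscountWhenBasketExceedsThreshold() {
            BasketPricer pricer = new BasketPricer(100.0, 0.10); // Given
            double price = pricer.priceFor(200.0);               // When
            assertEquals(180.0, price, 0.001);                   // Then
        }
    }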
BDD doesn’t replace TDD; I would argue that it complements TDD.
It all depends on the question: will the business care about this or not?
You want to use the right tool for the job. That is, some things fit very well with BDD and others fit better with TDD. When done properly, the line between BDD and TDD is very blurred, if it exists at all.
I wrote a blog post where I discuss a concept called the "Testing Iceberg": http://www.thinkcode.se/blog/2016/07/25/the-right-tool-for-the-job
It may shed some additional light on your question.
And I agree with @Lunivore that the choice of tool is less important than the understanding of the problem that you get from conversations.
Background:
I'm currently creating my first game engine project for learning purposes. This is the first larger project I've ever done. I knew going into the project the higher-level details of what's involved in a game engine (such as separate systems - input, graphics, audio, physics, etc.), but when getting into the nitty gritty I'm kind of figuring things out as I go. My coding process right now looks like this:
1.) Figure out some quick, higher level details of the system I want to design
2.) Start some experimental coding to see how exactly things need to work
3.) Add tests afterwards once I'm feeling good about what I have.
Since I'm pretty unfamiliar with the problem domain (game engine programming), I find I really need to experiment in code to see what functions I need and what design suits best. However, I know most of the community (so it seems, anyway) usually advocates more of a TDD approach to building applications. I can see the benefit of this, but I'm not quite sure how I would apply "write a test first, fail, then pass" when I'm not really even sure what functions I need to be testing yet. Even if I can think of 1 or 2 definite functions, what if during the experimenting phase I find it's better to split those functions out into more functions across different classes? Then I would have to redesign both my code and my tests constantly.
My Issue/Question(s):
So, is there a good way to go about using a TDD approach when you're experimenting in code? Or is TDD generally meant for those who are familiar with the projects they're working on and have an idea of what the design needs to be or what functions they will need to test?
A common approach when using TDD in unfamiliar territory:
Do some experimental coding to learn more
Set aside the experimental code and start the TDD "test first" approach
As you write production code to make your tests pass, you'll be harvesting pieces of your experimental code.
The benefit of this approach is that with the TDD "red,green,refactor" cycle, you'll usually be improving the design of your experimental code. Without it, your experimental code can often end up as a "big ball of mud".
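As a rough sketch of what one harvesting step might look like in JUnit (the Vector2 class is invented; imagine it came out of a game-engine spike): write the failing example first, then pull the matching fragment out of the spike to make it pass, then refactor.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class Vector2Test {
        // Red: written before Vector2 exists in the production tree.
        @Test
        public void addingTwoVectorsSumsTheirComponents() {
            Vector2 result = new Vector2(1, 2).add(new Vector2(3, 4));
            assertEquals(4.0, result.x, 0.001);
            assertEquals(6.0, result.y, 0.001);
        }
    }

    // Green: harvested from the experimental spike and cleaned up just
    // enough to make the test pass; refactoring comes afterwards.
    class Vector2 {
        final double x, y;
        Vector2(double x, double y) { this.x = x; this.y = y; }
        Vector2 add(Vector2 other) { return new Vector2(x + other.x, y + other.y); }
    }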
I have found that when I let "I'm not comfortable writing unit tests in this situation" be an excuse for not writing unit tests... I never write any unit tests.
If you know enough about the behavior of something to code it, you know enough to test it.
Even if I can think of 1 or 2 definite functions, what if during the experimenting phase I find it's better to split those functions out into more functions across different classes?
If you already have the original function tested and you're only changing the interface, then there should be little test logic that has to change. And the existing tests mean there's little risk of breaking something when you change the interface afterwards. Further, this concern is not specific to experimenting in a new domain.
Perhaps an unfamiliar domain is not the best place to start learning TDD. But I would certainly not say it is inherently unsuitable to the process.
I'm writing a short paper to expound the benefits of unit testing and TDD. I've included a short section at the end entitled "Beyond TDD" in which I'd hoped to cover a few different approaches based on TDD, BDD and ATDD in particular.
I'm sort of familiar with BDD (I've played with SpecFlow) but after reading up on ATDD, it sounds very similar. Are BDD and ATDD just two names for what is essentially the same process - document the behaviours in a 'ubiquitous language', generate an automated acceptance test suite, then go about making the acceptance tests pass?
While I agree generally with gishu's post, there are a couple of areas in which I disagree. In the IMHO section, he presents the BDD specification as the user story format developed by Rachel Davies et al: As a... I want... so that.
The BDD specification is Given... When... Then... as in
Given that the user is logged in, when the user clicks on X, then we should see Y.
This is about conditions, actions, and expectations, and is core to BDD.
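Inside a unit-testing tool, that condition/action/expectation shape is simply the structure of each example. A minimal JUnit sketch, with an invented Account class so it stands alone:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical domain class, included only to make the sketch runnable.
    class Account {
        private int balance;
        Account(int openingBalance) { this.balance = openingBalance; }
        void withdraw(int amount) { balance -= amount; }
        int balance() { return balance; }
    }

    public class AccountScenarios {
        @Test
        public void withdrawingCashReducesTheBalance() {
            // Given (condition): an account with 100 in it
            Account account = new Account(100);

            // When (action): the holder withdraws 30
            account.withdraw(30);

            // Then (expectation): the balance should be 70
            assertEquals(70, account.balance());
        }
    }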
ATDD is, as gishu suggests, the practice of driving development through the use of acceptance test specifications, implemented as executable acceptance criteria. The specification, in the BDD form, is neither required nor "best practice." In practice, however, it is effective in focusing the thinking and language on how to verify that the work has been done satisfactorily and meets requirements.
Note that BDD is not particularly based on TDD. ATDD is loosely based on TDD in that it is testing that is done before development is done. Beyond that, it is not focused on developer work, but on the overall direction and validation of the project. ATDD meshes well with Story Mapping, in that it plays well during the discovery phase when higher level requirements are being written, and it's important to know "how will we know when it has been done properly?"
BDD (Dan North, Dave Astels, Dave Chelimsky, et al.) is a movement to make the whole delivery process agile.
That said, teams doing BDD would be employing the practice of ATDD - i.e. the process of starting with executable specifications of acceptance criteria. An effective graphic to put the point across is one where ATDD wraps the inner cycle of TDD.
ATDD is just the practice of starting with executable acceptance criteria before development and using them to shape the design of the underlying code base (much like TDD, but at a chunkier level).
What follows is totally an opinion and may not be entirely accurate:
You could be doing ATDD but still not be doing BDD:
e.g. I could be writing automated acceptance tests that are not readable, that do not convey intent. I could be writing a comprehensive suite of automated 'regression' tests that do not tell me what the system does or how it works.
BDD stresses language and communication strongly, e.g. when specifying behavior. Instead of saying
testXDoesY
BDD would specify it as
As a StakeHolder, X should do Y so that I can Z.
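The difference shows up in code before you even read a method body. A hedged sketch (the Cart class is invented): both tests exercise the same behaviour, but only the second name tells you who cares and why.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Invented domain class so the contrast is self-contained.
    class Cart {
        private int items;
        void addItem() { items++; }
        int itemCount() { return items; }
    }

    public class NamingContrast {
        // Passes and catches regressions, but conveys almost no intent.
        @Test
        public void testAdd() {
            Cart cart = new Cart();
            cart.addItem();
            assertEquals(1, cart.itemCount());
        }

        // BDD-flavoured naming: stakeholder, behaviour and reason are
        // readable without opening the method body.
        @Test
        public void shopperSeesItemCountGrowSoTheyCanReviewTheirOrder() {
            Cart cart = new Cart();
            cart.addItem();
            assertEquals(1, cart.itemCount());
        }
    }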
So to close, I think the major difference (which may occur but doesn't have to) is that ATDD could turn into a comprehensive automated suite that just acts as a target for active development + regression. BDD would implore you to move the needle further: onto a shared language between problem and solution domains + living documentation via executable examples that enable future constructive conversation.
ATDD is often used synonymously with Behavior-Driven Development (BDD), Story Test-Driven Development (SDD) and Specification by Example. The main distinction of ATDD compared to other agile approaches is its focus on making developers, testers, business people, product owners and other stakeholders collaborate and come up with a clear understanding of what needs to be implemented.
I personally like the concept of ATDD as it aligns with the “shift-left paradigm”, where development and testing should start as early as possible in the SDLC. It brings more visibility into automation, since we start writing automated tests right at the beginning of the SDLC, and in turn it increases collaboration within the team.
Remember, ATDD is not a one-size-fits-all solution. It is one of the agile approaches. There are various other ways to help improve processes in teams, but I specifically found this approach to focus on better acceptance tests and, most importantly, to emphasize collaboration, which is an integral part of this approach.
I would say: nothing much. My first assumption would be that ATDD, BDD, Specification by Example, agile acceptance testing, etc. all mean the same thing. If someone uses these terms to mean separate things, they had better explain the difference in that context.
I'm new to the TDD approach, so I'm wondering if anyone experienced with this could enlighten me a little. I would like to get some leads on how to use UML and TDD methodology together.
I've been used to: design with UML --> generate skeleton classes (and then keep them synchronized) --> implement, and finally test. I must admit that the testing part was the worst one, so I started to look for something else - TDD. I have some general knowledge of what it is about, but before I proceed further, I am interested in knowing how it goes together with software design, especially UML.
So when I first design/create a test, how can UML fit in? Would it be possible to design classes first, from them create skeleton classes, from those generate unit tests which would be "filled" before the actual implementation of the UML-pregenerated classes? Would this approach break the whole idea of TDD? Or is there any other way that would keep UML and TDD together?
The TDD cycle is test, code, refactor, (repeat) and then ship. As the TDD name implies, the development process is driven by testing, specifically that means writing tests first before developing or writing code.
The first paragraph is purely definitional ... from how Kent Beck defines TDD ... or how Wikipedians generally understand TDD ... I thought it was necessary to belabor the point with this definition, because I am not certain whether everyone is really discussing the same TDD, or whether others really understand the implications of the [most important part of the] definition of TDD, the implications of writing tests first. In other words, I think the answers to this question should delve a bit deeper into TDD, rather than explaining a bias for/against UML. The lengthy part of my answer relates to my opinion on using UML to support TDD ... UML is just a modelling language, certainly not required to do TDD; it could get in the way if applied inappropriately ... but UML can help with understanding requirements in order to write tests, with how modeling can help refactoring when needed, and with how collecting the artifacts of the process speeds the documentation of shipped code. I would welcome any comments, criticisms or suggestions, but please don't vote my answer up or down because you agree or don't agree with the first paragraph ... the TDD acronym is Test-Driven Development, which means Test-First Development.
Writing a test first implies that the developer understands the specifications and requirements FIRST ... obviously, any test that is written should fail until the code gets written, but in TDD, the test must be written first -- you can't do TDD without being focused on understanding a requirements specification before you write tests, before you write code. There are situations where the requirements do not exist at all; requirements elicitation involves a bit of hacking a pre-pre-alpha version to "throw mud at the wall to see what sticks" ... that sort of thing should not be confused with development, certainly not test-driven development; it's basically just one form of requirements elicitation for a poorly understood application domain.
UML diagrams are one form of requirements input to TDD. Not the only one, but probably better than written specifications if people who are knowledgeable in creating UML diagrams are available. It is often better to work with visual diagrams for better communication when exploring the problem domain [with users/clients/other systems providers] during pre-implementation requirements modeling sessions ... where simulating performance is necessary for really understanding requirements (e.g. CANbus network interactions), it is often ideal if we can work with a specification language or CASE tool like Rhapsody or Simulink RT Workshop that can be executable, modular and complete ... UML is not necessarily part of TDD, but it is part of an approach to design synthesis that involves expending more effort understanding what is required first, before writing any code [and then wasting time trying to market that code to someone who cares]; generally, more focus on understanding the system's requirements lays the groundwork for more efficient, productive agile TDD iterations.
Refactoring is about improving design -- cleaning up working, tested code to make it simpler and easier to maintain. You want to tighten it up as much as possible to remove obfuscated tangles where bugs might be hiding or could spawn in future releases -- you don't refactor because a customer requires it; you refactor because it's cheaper to ship clean code than to continue to pay the cost of supporting/maintaining a complicated mess. Generally, most of TDD is code-focused; but you could employ UML for looking at a larger scope or taking a step back to look at the problem, e.g. creating a class diagram to help identify [missing] tests or possible refactorings. This is not something you'd need to mandate or want to do on a general basis, but it helps where appropriate.
The last step, 'ship', is a serious step ... 'ship' is not shorthand for "toss it over the wall and forget it, because good code doesn't need support" or "abandon and hope that there are no more iterations." From a financial or business perspective, shipping is the most important step of TDD, because it is where you get paid. Shipping does involve "shifting gears", because it includes systems integration, preparation for support and maintenance, getting ready for the next round of development, etc. The primary use of UML diagrams here will be to communicate [in abstract terms] how the code does what it does ... UML is useful because hopefully the diagrams are an artifact of the requirements and development processes; it's not necessary to start from scratch when the code ships ... as a communication tool, UML is appropriate for reducing integration errors in multi-module systems, larger projects that might involve modules written in different languages, and networks of embedded systems where different companies must collaborate on safety-critical systems but need the abstraction to be stingy with or protective of their "proprietary knowledge."
Just as you should avoid using a big hammer where a tiny screwdriver is appropriate, you aren't going to get anywhere by asking all developers to standardize on using Emacs as their editor. Sometimes the view is not worth the climb -- you don't want to always haul out the UML banner or become known as the guy who was always pushing UML ... particularly not in situations where there is no substitute for writing tests or reading code. If you stick to appropriate situations, you should not be afraid to use UML as a communication language wherever the language helps you.
So when I first design/create a test, how can UML fit in? Would it be possible to design classes first, from them create skeleton classes, from those generate unit tests which would be "filled" before the actual implementation of the UML-pregenerated classes? Would this approach break the whole idea of TDD? Or is there any other way that would keep UML and TDD together?
If you create an entire skeleton class - by which I assume you mean a class with all methods defined but empty - before writing your first test, then I would say you are not doing TDD, and lose the benefits of TDD. As we do TDD - Test-Driven Design - our tests incrementally lead us to the next elements - methods and classes - that our program needs. If you have predetermined, pre-specified in UML, what your classes and methods are, a great deal of your design is already done, and either your subsequent development is constrained to follow it, or you have wasted effort that subsequent work will undo.
There may be ways to use both UML and TDD together for design, but as you have described it, you are doing the design within UML, before TDD ever gets a chance. That won't give you the full benefit of TDD.
Or is there any other way that would keep UML and TDD together?
UML and TDD fit wonderfully together:
1. Make the initial design in UML - it doesn't have to be complete, just consistent and self-contained
2. Create the empty tests. This step could also be automated
3. All tests will fail at first, as required by TDD (because the skeletons generated from the UML contain no implementation)
4. Start writing tests for each class:
   - Start with classes which do not have a great deal of associations if you are confident in your software architecture and UML skills (no, you're not doing waterfall, but sometimes you just know what you're doing - you already know the application domain, or you used expert knowledge at step 1)
   - Start with classes which have a lot of associations ("central classes") if you are NOT confident in your understanding of the application domain - this will let you eliminate bad design decisions as soon as possible, because you will notice them as early as possible
5. ... The tests are still failing
6. In parallel with each unit being tested (step 4), write the implementation inside the empty method bodies. Do NOT modify any class, interface or method names or parameter signatures. You are only allowed to add private helper methods, nothing more
7. If at step 6 (which runs in tandem with step 4) you realize you need to make changes to the design:
   - Go to step 1 and refine the UML, then generate the code again (good UML tools will not overwrite your implementation). Attention: avoid introducing new classes. You want to finish step 13 within a few weeks
   - Re-run the tests and fix any failing tests that previously passed
   - Continue with what you left off at step 6
8. Go to step 6 if not all class tests pass
9. Proceed to component, package and subsystem tests
10. Add deployment to the UML model and deploy to the integration environment (http://en.wikipedia.org/wiki/Development_environment_%28software_development_process%29)
11. Proceed to integration tests
12. Go through the test/QA stage
13. Go through User Acceptance Testing
14. Reiterate the previous steps as required (according to your iterative development process)
15. ... months pass
16. Deploy version 1.0.0 to production
Do not try to arrive at too many design decisions at step 1 or during subsequent refinements of step 1. You want to finish step 13 in the first iteration after a few weeks.
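As a concrete sketch of steps 2-6 (the PaymentService class and its method are invented): the skeleton's signatures come from the UML model and stay frozen; the test targets that interface, and only the method body is filled in.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Steps 1-2: skeleton as it might be generated from the class diagram.
    // The signature is fixed by the model; only the body may be written.
    class PaymentService {
        int totalWithTax(int amount, int taxPercent) {
            // Step 6: implementation added inside the generated body.
            return amount + amount * taxPercent / 100;
        }
    }

    public class PaymentServiceTest {
        // Step 4: the test is written against the UML-defined interface, so
        // any design change forces a round trip through the model (step 7).
        @Test
        public void addsTaxToTheAmount() {
            assertEquals(110, new PaymentService().totalWithTax(100, 10));
        }
    }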
While some people think UML is a design methodology, it is first and foremost a communication tool. Hence the name, Unified Modeling Language. The idea is to have common vocabulary (of shapes) that you can insert in a book and everybody will understand.
TDD on the other hand is a design methodology, the way to construct the system starting from its interface and its consumer, and only then adding the implementation.
Once your design has emerged as a result of applying TDD, you can communicate that design using UML. If you don't need to communicate (like if you're the only developer), you arguably don't need UML.
Some people think of domain analysis (identifying key Nouns, Verbs and Adjectives and building a basic ontological model) as being a part of UML methodology, reflected in the use case & ER diagrams... This may be useful to do before you jump into TDD red/green/refactor cycle. Then again, strictly speaking this is DDD (Domain Driven Design), not UML per se.
If you're designing classes using UML, you're designing what you think the code will look like. But what you create using TDD is the reality.
Leave UML until the end, to document the results of the code you created using TDD.
UML is the design part, of course.
I'd only use it on the system I was ending up with. Rendering test classes in UML seems ludicrous to me.
I'd use it to get a general sense of what I was creating. Then I'd start with TDD to implement it.
As far as keeping them in sync, I'd get a UML tool capable of import and export. I'd worry more about the code and less about the UML. It's useful for capturing your thoughts and documenting later, but not much else. I'd prefer working, tested code over diagrams any day.
I keep hearing how Test Driven Development is an unfortunate name for the technique because TDD is not about Testing, it's about design.
Personally, I've never had a problem with the name simply because I've always thought about it more as a way to program to a specification. I've never had a problem with the fact that the code that is verifying that I've met the spec has the word 'test' in a class name, method name, annotation, or attribute. It's a convention I've followed in order to have test frameworks do the heavy lifting for me.
I'm not the most dogmatic about my approach: although I do try to write my tests first, I often find myself inspired to write some extra code, which I then try to wrap in tests. I do tend to find that TDD outsmarts me from an API-design perspective every time I do that (and I inevitably end up refactoring my non-tested code as soon as I start writing tests around it), but the point is I'm okay with not writing all tests up-front, as long as I end up with a test harness around everything of interest when I'm done.
So, back to the question. What would be a better name for TDD? Would a name that didn't involve the word 'test' make a difference?
Test driven development is not a way of testing, it's a way of developing (including design, coding and testing). Otherwise it would be called development driven testing.
The operative word in the phrase "test driven development" is development; "test-driven" is just the modifier. You should explain that to whoever is giving you grief over the term.
I have absolutely no problem with the name since it accurately reflects the intent. And you don't have to have all your tests finished before starting development but you should have them specified at least.
I think that's what Behavior-Driven Development tried to achieve (among other things).
I've stopped caring about definitive names -- it's all open to interpretation.
In my (subjective) opinion, I believe that the reason some people started the campaign against the word 'Test' in Test-Driven Development was because of bad experience with explaining the concept to certain types of developers.
There are developers who, as soon as they hear the word 'Test', stop listening - they think it's not their job, so why should they care? That attitude is obviously detrimental if you need people to adopt the practice of TDD. If one solution to that problem is to rename TDD into something that does not include the word 'Test', then fine by me, but for the rest of us, let's just continue to call it TDD.
I've seen at least the following terms suggested:
Test-Driven Development (the original term)
Test-Driven Design (because it's more about design)
Example-Driven Design (because tests could be viewed as examples of how the code is supposed to work)
Design-by-Example (just another phrase for EDD)
Behavior-Driven Design (the declared intent was different, but most attempts so far have looked a lot like TDD with some extra verbosity added)
Even though 'Design' is very important to me, I still think the tests provide value. Sure, they are not QA tests, but the entire mass of tests that comes as a by-product of TDD still provides the safety net that allows you to refactor with confidence. In fact, Refactoring has an entire section on unit testing, because Fowler considered it very dangerous to attempt refactoring without a regression test suite.
Anyone really, truly believing that the tests are of no importance in TDD should put their money where their mouth is and delete the unit tests when their code is done.
For me, TDD makes sense. Yes, it is first and foremost about Development and Design, but the Tests are important artifacts too.
I think Test Driven Development is fine. That's what it is. Yes, it affects the design. Yes, it improves the quality. Yes, it improves the overall maintainability and ability to refactor. Yes it's faster. Etc.
I think of BDD as a different, more specific practice. But maybe it's more in line with the original intent of TDD than tools like JUnit promoted.
Is it a bad name because it speaks more towards the process than the benefits? Maybe.
Other ideas:
responsible development
test-first development
holistic development
not hacking
I think "Code-By-Example" is the best name for what we do. I think that's what Dan North calls it. Code By Example is then a part of BDD (so BDD is larger/more specific that TDD).
The concept of "Context Specifications" is also quite similar, promoted by people like Scott Bellware.
I agree that using the word "test" makes it difficult to communicate the technique to new developers.
I'm looking for resources that provide an actual lesson plan or path to encourage and reinforce programming practices such as TDD and mocking. There are plenty of resources that show examples, but I'm looking for something that actually provides a progression that allows the concepts to be learned instead of forcing emulation.
My primary goal is speeding up the process for someone to understand the concepts behind TDD and actually be effective at implementing them. Are there any free resources like this?
It's a difficult thing to encourage because it can be perceived (quite fairly) as a sea-change; not so much a progression to a goal but an entirely different approach to things.
The short-list of advice is:
You need to be the leader; you need to become proficient before you can convince others; you need to be able to show others the path and settle their uncertainties.
First become proficient in writing unit tests yourself
Practice writing tests for existing methods. You'll probably beat your head on the desk trying to test lots of your code--it's not because testing is hard or you can't understand testing; it's more likely because your existing code and coding style isn't very testable.
If you have a hard time getting started then find the simplest methods you can and use them as a starting point.
Then focus on improving the testability of the code you produce
The single biggest tip: make things smaller and more to the point. This is the big change -- the hardest part to get yourself to do, and even harder to convince others of.
Personally, I had my "moment of clarity" while reading Bob Martin's "Clean Code" book; an early chapter talks about what a clean method should look like, and as an example he takes a ~40-line method that visually resembled something I'd produce and refactors it into a class which is barely larger line-count-wise, but consists of nothing but bite-sized methods that are perhaps 3-7 lines each.
Looking at these itty-bitty methods, it suddenly clicked that the unit-testing cornerstone "each test only tests one thing" is easiest to achieve when your methods only do one thing (and do that one thing without having 30 internal mechanisms at play).
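A small Java sketch of that click (the example itself is invented): once each method does one thing, each test can assert one thing.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Bite-sized methods in the "Clean Code" spirit: one job per method.
    class PriceFormatter {
        long roundToCents(double amount) {
            return Math.round(amount * 100);
        }
        String format(long cents) {
            return String.format("$%d.%02d", cents / 100, cents % 100);
        }
    }

    public class PriceFormatterTest {
        @Test
        public void roundsToTheNearestCent() {
            assertEquals(1999, new PriceFormatter().roundToCents(19.994));
        }

        @Test
        public void formatsCentsAsDollarsAndCents() {
            assertEquals("$19.99", new PriceFormatter().format(1999));
        }
    }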
The good thing is that you can begin to apply your findings immediately; practice writing small methods and small classes and testing along the way. You'll probably start out slow, and hit a few snags fairly quickly, but the first couple months will help get you pointed in the right direction.
You could try attending a coding dojo (or hosting one if there is none near you!).
I attended one such exercise and it was fun learning TDD.
Books are always a good resource, even though they're not free. The time you'd spend searching for good free resources may well be worth more than the money those books cost.
"Test driven development by example" by Kent Beck.
"Test Driven Development in Microsoft .NET" by James W. Newkirk and Alexei A. Vorontsov
please feel free to add to this list
One thing I worked through that helped me appreciate TDD more was NHibernate and the Unit of Work Pattern. Although it's specific to NHibernate and .NET, I liked the way that it was arranged. Using TDD, you develop something (a UnitofWork) that's actually useful rather than some simple "this is what a mock looks like" example.
How I learn a concept best is by putting it to use towards an actual need. I suggest you take a look at the structure of the article and see if it's along the lines of what you're looking for.
Geeks are excellent at working to metrics, whether they are good for them or not!
You can use this to your advantage. Set up a CI server and fail the build whenever code coverage drops below 50 percent. Let them know that the threshold will rise by 10 percent every month until it reaches 90. You could perhaps use commit hooks to stop them being able to check such code in to begin with, but I've never tried this myself.
Let them know that the team's coverage will be taken into account in any performance reviews, etc. By emphasising that it is the coverage of the team, you should get peer pressure helping you ensure good coverage.
This will only ensure they are testing their code, not how well they are testing their code, nor whether they are writing the tests first. However, it is strongly encouraging (or forcing) them to incorporate testing into their daily development process.
Generally, once people have something in their process, they'll want to do it as easily/efficiently as possible. TDD is the easiest way to write code with high coverage, as you don't write a line of code without it being covered.
Find someone with experience and talk to them. If there isn't a local developer group, then start one.
You should also try pushing things too far to start with, and then learn when to back off. For example, the whole mock thing started when someone asked "What if we program with no getters".
Finally, learn to "listen to the tests". When the tests look dreadful, consider whether it's the code that's at fault, not your testing technique.