TDD & BDD? Which, Why and How? [closed]

TDD & BDD? Which, Why and How?
Can anyone give a good explanation to justify "Which", "Why" and "How" on both?
Thanks in advance.

TDD is used more for unit testing, e.g. testing a method on a class. BDD is used for testing the behaviour of a system, e.g. creating a user or sending out new product emails.
So for TDD you might see something like this:
[Fact] // xUnit-style test; with NUnit this would be [Test] and Assert.AreEqual
public void Test()
{
    // Arrange.
    var sut = new ClassToTest();

    // Act.
    int result = sut.DoSomething();

    // Assert.
    Assert.Equal(23, result);
}
With BDD (depending on the tools you're using) you tend to see something like this:
Feature: Add a user
  As a system admin
  In order to give a user access to the site
  I want to create a user account

Scenario: Creating a basic user
  Given I have the user's name
  When I create a new user account
  Then that user can log onto the site
As you can see, BDD tests the behaviour of a system rather than a single unit. Here is a very good intro to BDD by Dan North - http://dannorth.net/introducing-bdd/
I would recommend using TDD when you are building your classes/code and want to test little bits of it at a time. Use BDD when you want to test more than one of those classes in a test, i.e. an integration test.
EDIT:
With the how side of things, for BDD I would recommend using SpecFlow. This is a popular BDD tool which adds a lot of functionality to Visual Studio for creating feature files (The Feature: stuff I mentioned above) and running and debugging the tests.
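For illustration, a step-definition class that could back the feature above might look roughly like this; only the SpecFlow attributes are real, while the UserService class and its Create/CanLogOn methods are made-up placeholders:

using TechTalk.SpecFlow;
using NUnit.Framework;

[Binding]
public class AddUserSteps
{
    // UserService, Create and CanLogOn are hypothetical names for this sketch.
    private readonly UserService _users = new UserService();
    private string _userName;

    [Given(@"I have the user's name")]
    public void GivenIHaveTheUsersName()
    {
        _userName = "Jane Doe";
    }

    [When(@"I create a new user account")]
    public void WhenICreateANewUserAccount()
    {
        _users.Create(_userName);
    }

    [Then(@"that user can log onto the site")]
    public void ThenThatUserCanLogOntoTheSite()
    {
        Assert.IsTrue(_users.CanLogOn(_userName));
    }
}

SpecFlow matches each Given/When/Then line in the feature file to one of these methods by the attribute text, so the plain-English scenario runs as an ordinary automated test.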
Under the hood SpecFlow can use NUnit or MSTest to generate the tests. Other BDD tools include:
MSpec
NSpec
SpecsFor
StoryQ
and many others I've forgotten about right now :) I would suggest you try them out and see which one you prefer.
For TDD you have many options including:
NUnit
xUnit
MSTest
A lot of the above tools can be installed via NuGet in Visual Studio, which is handy.


Web Application Tests Visualization [closed]

I'm testing a web application with several functionalities. I didn't develop this application, I'm a mere tester. I'm new to testing, and when I started working on it I was given a series of Selenium tests. These tests are performed using Selenium IDE on Firefox.
They're very easy to perform this way, because you can just record, store variables and run the test.
The problem is that, for example:
* a web page has a table with 3 rows, I prepare the test for this scenario, and it works
* the day after, the table has 4 rows, so my test doesn't work anymore
* Selenium IDE doesn't allow me to write a for loop over the rows or columns
That's why I thought I could export the Selenium tests to Java (JUnit 4 / WebDriver) and import them into Eclipse. That way I'm able to improve the code by adding for loops and other logic.
I created a project for my test suite, imported the JUnit 4 and Selenium libraries, ran the suite and everything was alright, no errors. It was great.
The thing is: how do I manage to VISUALIZE (or play back) my tests in the Firefox browser, as I was doing earlier in Selenium IDE?
Thanks a lot.
When you run the tests in Eclipse, it will open a Firefox window and perform the tests just as it would in the IDE.
As for actual test management (i.e. starting/stopping tests) you will need to invest in some CI software. TeamCity, Jenkins or something like that will work.
As for picking elements out (the 'find' option in Selenium IDE), this is much more tricky as you've just ditched the IDE and are very new to Selenium. It is, however, easily done.
You can run XPath and CSS queries directly in Firefox's Console (or Firebug, if you wish). Thus, you can still run the same queries you would in the IDE; the only difference is how the results are returned to you. Selenium IDE will 'highlight' an element, whereas the console will return it as a DOM object.
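To give a feel for the variable-row problem from the question: the asker is exporting to Java, but the same idea looks roughly like this in C# (the language used elsewhere on this page); the URL and the #results selector are invented for the sketch:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class ReportTableTests
{
    [Test]
    public void EveryRowInTheTableHasText()
    {
        // Launches a visible Firefox window, just like Selenium IDE playback.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("http://example.com/report"); // hypothetical page

            // Works whether the table has 3 rows today or 4 rows tomorrow.
            foreach (IWebElement row in driver.FindElements(By.CssSelector("#results tr")))
            {
                Assert.IsNotEmpty(row.Text);
            }
        }
    }
}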
If none of the above helps, please go into more detail about what you require, but I've covered what the IDE allows you to do and what the alternatives are.

newbie - Best patterns and tools for a real asp.net mvc3 application [closed]

I'm studying how to build an MVC 3 web application and I am very confused.
I spent the last few days reading about DDD, DI, IoC, TDD, EF (and tools like Castle, AutoMapper, Windsor, etc.) but I need your experience to make up my mind and choose a path to follow.
What I know:
- I want to use MVC 3.0
- I want to use EF 4.1 or 4.2
- I don't want to use unit tests for the moment, but I want the structure of my project to support them in the future with few modifications.
- I want to heavily use jQuery (the application needs to be "very ajax")
- I obviously don't want my controllers and views to directly use EF objects
- I don't want to write duplicate code (i.e. if I have a "person" db table with a "FirstName" property, I don't want to create a class for each layer of the software [dal, bll, ui, ...] with the same "person" data. Adding a field to the database should not require adding a property to way too many classes)
What I'd like to know:
- Which pattern(s) would you use?
- Best way of organizing projects inside the vs2010 solution?
- Code first or database first?
Last but not least: Is it possible to use all the cool features of MVC (data annotations, validation, etc.) with a heavily ajaxed site?
Of course I don't expect a fully detailed answer: I just need some pointers/help/link to go in the right direction and study what I need.
Thanks in advance.
To describe your situation: you want to use a couple of frameworks and as many of the best practices/patterns out there as possible. You will fail. Your job is to build working software, not to use as many patterns as possible.
Some "high level advice":
DDD: don't do it! It does make sense in some projects, but not as often as people think
TDD: go for it to improve your design
Patterns: when you have a solution for something or a design idea, check whether there is a pattern that describes that idea, not the other way round.
Avoid some patterns: Singleton, facade under some conditions,...
Take a look at SOLID and read Clean Code
Well, I've seen a lot of overdesigned applications which make maintenance a nightmare. I would advise you to start by just following the Separated Interface pattern and the Single Responsibility Principle. Those two combined make it easy to maintain and refactor code in future versions.
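As a rough sketch of what that looks like in practice (Person, MyDbContext and the repository names are invented for this example, not from any framework), the UI depends only on an interface while the single EF-aware class lives behind it:

using System.Web.Mvc;

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

// The interface is defined alongside the callers, not the data layer.
public interface IPersonRepository
{
    Person GetById(int id);
    void Add(Person person);
}

// The only class that knows about Entity Framework (MyDbContext is a hypothetical DbContext).
public class EfPersonRepository : IPersonRepository
{
    private readonly MyDbContext _context;

    public EfPersonRepository(MyDbContext context) { _context = context; }

    public Person GetById(int id) { return _context.People.Find(id); }

    public void Add(Person person)
    {
        _context.People.Add(person);
        _context.SaveChanges();
    }
}

// Controllers (and, later, unit tests) only ever see the interface.
public class PersonController : Controller
{
    private readonly IPersonRepository _people;

    public PersonController(IPersonRepository people) { _people = people; }
}

Later on, a unit test can hand PersonController an in-memory fake that implements IPersonRepository, which is exactly the "support tests in the future with few modifications" requirement from the question.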
Unit testing is also a good thing. Not just to make sure that your application works, but to make you rethink your design. A class that is hard to test is most likely badly designed. The most common solution is to break the class down into smaller classes with clearer responsibilities.
I usually try to use black-box testing when writing unit tests. That is, I try NOT to look inside the class that the tests are for. If I can't figure out how a class works by looking at its contract (method definitions), I might have to refactor it.

How to treat future requirements in terms of TDD [closed]

While attempting to adopt more TDD practices lately on a project, I've run into a situation regarding tests that cover future requirements, which has me curious about how others are solving this problem.
Say for example I'm developing an application called SuperUberReporting and the current release is 1.4. As I'm developing features which are to be included in SuperUberReporting 1.5, I write a test for a new file export feature that will allow exporting report results to a CSV file. While writing that test it occurs to me that the features to support exports to some other formats are slated for later versions 1.6, 1.7, and 1.9, which are documented in our issue tracking software. Now the question that I'm faced with is whether I should write up tests for these other formats, or whether I should wait until I actually implement those features. This question hits at something a bit more fundamental about TDD which I would like to ask more broadly.
Can/should tests be written up front as soon as requirements are known or should the degree of stability of the requirements somehow determine whether a test should be written or not?
More generally, how far in advance should tests be written? Is it OK to write a test that will fail for two years, until the feature it covers is slated to be implemented? If so, how would one organize the tests to separate those that are required to pass from those that are not yet required to pass? I'm currently using NUnit for a .NET project, so I don't mind specifics, since they may better demonstrate how to accomplish such organization.
If you're doing TDD properly, you will have a continuous integration server (something like Cruise Control or TeamCity or TFS) that builds your code and runs all your tests every time you check in. If any tests fail, the build fails.
So no, you don't go writing tests in advance. You write tests for what you're working on today, and you check in when they pass.
Failing tests are noise. If you have failing tests that you know fail, it will be much harder for you to notice that another (legitimate) failure has snuck in. If you strive to always have all your tests pass, then even one failing test is a big warning sign -- it tells you it's time to drop everything and fix that bug. But if you always say "oh, it's fine, we always have a few hundred failing tests", then when real bugs slip in, you don't notice. You're negating the primary benefit of having tests.
Besides, it's silly to write tests now for something you won't work on for years. You're delaying the stuff you should be working on now, and you're wasting work if those future features get cut.
I don't have a lot of experience with TDD (just started recently), but I think while practicing TDD, tests and actual code go together. Remember Red-Green-Refactor. So I would write just enough tests to cover my current functionality. Writing tests upfront for future requirements might not be a good idea.
Maybe someone with more experience can provide a better perspective.
Tests for future functionality can exist (I have BDD specs for things I'll implement later), but should either (a) not be run, or (b) run as non-error "pending" tests.
The system isn't expected to make them pass (yet): they're not valid tests, and should not stand as a valid indication of system functionality.
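In NUnit, for instance, option (a)/(b) can be as simple as an Ignore attribute with a reason: the test is compiled and listed, but reported as skipped rather than failed. The class name and version numbers below are just examples:

using NUnit.Framework;

[TestFixture]
public class ReportExportTests
{
    [Test]
    public void ExportsResultsAsCsv()
    {
        // Real test for the 1.5 feature being built now.
    }

    [Test]
    [Ignore("XML export is slated for 1.6 - not implemented yet")]
    public void ExportsResultsAsXml()
    {
        // Written ahead of time; shows up as skipped, never as a failure.
    }
}

That keeps the build green while still recording the intent, and the Ignore reason can point back to the issue tracker entry for the future version.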

Are there any example or resources or even framework which consists of Asp.net MVP + Sandcastle + TDD/Nunit + Fitnesse?

Our dev team is currently using ASP.NET 2.0. After a lot of browsing and cross-site referencing, I found that the new in thing is ASP.NET MVC, but also that there are a few things it can't do, such as supporting ASP.NET controls and view state.
I'm not sure what the other limitations are besides the total change of paradigm, where each page now links to a controller which is linked to a certain view. So, to make the learning curve less steep, I wanted to pick up MVP first, as I think just being able to take the application and domain layers out and make them testable is already a big help to our overall process without being too much of a hassle.
After more browsing around, I find that NDoc is a bit outdated now and is being replaced by Sandcastle, which has an additional add-in called DocProject, so that should cover the auto-generation of documentation from the code very well.
And to handle the acceptance tests, I found this tool called FitNesse, which is based on FIT and should help.
Being totally new to all of this, I'm wondering if this is a good overall process and tool set to cover our team's development process, and whether there are other samples/resources/frameworks out there which cover all of these steps and do a better job than trying to fill the gaps with several tools, i.e. a framework?
Basically my question is: is my overall process above well covered by the tools that I've researched? And is there a better way to do ASP.NET TDD + auto doc generation + acceptance testing?
Any advice/feedback is appreciated.
Thanks!! :)
Yes, ASP.NET MVC with NUnit and FitNesse are reasonable choices for an 'agile' approach. Just not sure where auto-doc generation fits into this. Will anyone read this generated documentation or will they just look at the code? If you haven't read it yet, get Robert Martin's 'Clean Code' for some good tips on how to make code maintainable and understandable without lots of comments and generated documents.

NUnit best practice

Environment: (C# WinForms application in Visual Studio Professional 2008)
I've been digging around a little for guidance on NUnit best practices. As a solo programmer working in a relatively isolated environment I'm hoping that collective wisdom here can help me.
Scott White has a few good starting points here but I'm not sure I totally agree with everything he's said -- particularly point 2. My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer.
What can you recommend as best practices for NUnit?
If by point 2, you mean the "bin folder per solution" -- I can see your point. Personally, I would simply add the reference to each test project. If, on the other hand, you really mean (1b) "don't put your tests in the same assembly as your code", I heartily agree with him and disagree with you. Your tests should be distinct from your production code in order to enhance code clarity and organization. Keeping your test classes separate helps the next programmer understand it more easily. If you need access to internals in your tests -- and you might, since internal methods are "public" to the assembly -- you can use the InternalsVisibleTo construct in the AssemblyInfo.cs file.
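For example, a single attribute in the production assembly's AssemblyInfo.cs is enough; the test assembly name here is just a placeholder:

// AssemblyInfo.cs in the production project
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.Tests")]

With that in place, members marked internal can be called directly from the MyApp.Tests project without making them public.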
I, too, would recommend that, in general, it is sufficient to unit test only the public interface of the code. Done properly (using TDD), the private methods of your code will simply be refactorings of previous public code and will have sufficient test coverage through the public methods. Of course, this is a guideline not a law so there will be times that you might want to test a private method. In those instances, you can create an accessor and use reflection to invoke the private method.
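If you do go that route, plain reflection is enough; ClassToTest, ComputeInternal and the expected values below are made-up names for the sketch:

using System.Reflection;
using NUnit.Framework;

[Test]
public void CallsAPrivateMethodViaReflection()
{
    var sut = new ClassToTest();
    MethodInfo privateMethod = typeof(ClassToTest).GetMethod(
        "ComputeInternal", BindingFlags.NonPublic | BindingFlags.Instance);

    object result = privateMethod.Invoke(sut, new object[] { 42 });

    Assert.AreEqual(84, result);
}

Treat this as a last resort, though - as noted above, well-factored code rarely needs it.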
Another recommendation that I would make is to use unit testing and code coverage in tandem. Code coverage can be a useful heuristic to identify when you need more tests; lack of coverage should be used as a guide to indicate where more testing may be needed. This isn't to say that you need 100% coverage -- some code may be simple enough not to warrant a unit test (automatic properties, for instance), even if it isn't touched by your existing tests.
There were a couple of issues that I had with the article. Probably the biggest is the lack of abstraction away from the database for unit tests. There probably are some integration tests that need to go against the db -- perhaps when testing trigger or constraint functionality if you can't convince yourself of their correctness otherwise. In general, though, I'm of the opinion that you should implement your data access as interfaces, then mock out the actual implementations in your unit tests so that there is no need to actually connect to the database. I find that my tests run faster, and thus I run them more often when I do this. Building up a "fake" database interface might take a little while but can be reused as long as you stick with the same design pattern for your data access.
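A minimal hand-rolled fake along those lines might look like this; IOrderRepository, Order and OrderService are all invented for the example:

using System.Collections.Generic;
using NUnit.Framework;

public class Order { public int CustomerId; public decimal Amount; }

// The data-access interface the production code depends on.
public interface IOrderRepository
{
    IEnumerable<Order> GetOrdersFor(int customerId);
}

// In-memory fake: no database connection, so tests stay fast.
public class FakeOrderRepository : IOrderRepository
{
    public List<Order> Orders = new List<Order>();

    public IEnumerable<Order> GetOrdersFor(int customerId)
    {
        return Orders.FindAll(o => o.CustomerId == customerId);
    }
}

public class OrderService
{
    private readonly IOrderRepository _orders;
    public OrderService(IOrderRepository orders) { _orders = orders; }

    public decimal TotalFor(int customerId)
    {
        decimal total = 0m;
        foreach (Order order in _orders.GetOrdersFor(customerId)) total += order.Amount;
        return total;
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void TotalsOnlyTheCustomersOwnOrders()
    {
        var fake = new FakeOrderRepository();
        fake.Orders.Add(new Order { CustomerId = 1, Amount = 10m });
        fake.Orders.Add(new Order { CustomerId = 2, Amount = 99m });

        Assert.AreEqual(10m, new OrderService(fake).TotalFor(1));
    }
}

The same FakeOrderRepository can be reused across many tests, which is the "reused as long as you stick with the same design pattern" point above.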
Lastly, I would recommend using NUnit with TestDriven.Net - a very useful plugin whether you're doing NUnit or MSTest. It makes it very handy to run or debug tests from a right-click context menu.
My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer.
If your code cannot be tested using only public entry points, then you have a design problem. You should read more about TDD and SOLID principles (especially single responsibility principle and dependency inversion). Then you will understand that following TDD approach will help you write more testable, flexible and maintainable code, without the need for using such "hacks" as testing classes' private parts.
I also highly recommend reading Google's guide to testability by Miško Hevery, it has plenty of code samples which cover these topics.
I'm in a fairly similar situation, and this question describes what I do: keep-your-source-close-and-your-unit-tests-closer. There weren't too many others enamoured with my approach, but it works perfectly for me.