I'm preparing to create my first Unit Test, or at least that's how I was thinking of it. After reading up on unit testing this weekend, I suspect I actually want to do Integration Testing. I have a black-box component from a 3rd party vendor (e.g. a digital scale API) and I want to create tests to test its usage in my application. My goal is to determine whether a newly released version of said component works correctly when integrated into my application.
The use of this component is buried deep in my application's code, and the methods that utilize it would be very difficult to unit test without extensive refactoring, which I can't do at this time. I plan to, eventually.
Considering this, I was planning to write custom Unit Tests (i.e. not derived from my classes' methods or properties) that put this 3rd party component through the same operations my application will require of it. I do suspect that I'm circumventing a significant benefit of Unit Testing by doing it this way, but as I said earlier, I can't stop and refactor this particular part of my application at this time.
I'm left wondering whether I can still write Unit Tests (using Visual Studio) to test this component, or whether that goes against best practices. From my reading it seems that the Unit Testing tools in Visual Studio are very much designed to do just that - unit test the methods and properties of a component.
I'm going in circles in my head; I can't determine whether what I want is a Unit Test (of the 3rd party component) or an Integration Test. I'm drawn to Unit Tests because they give me a managed system for executing tests, but I don't know if they are appropriate for what I'm trying to do.
Your plan of putting tests around the 3rd party component, to prove that it does what you think it does (what the rest of your system needs it to do), is a good idea. That way, when you upgrade the component, you can tell quickly whether it has changed in ways that mean your system will need to change. This would be an Integration Contract Test between that component and the rest of your system.
Going forward, it would behoove you to put that 3rd party component behind an interface upon which the other components of your system depend. Then those other parts can be tested in isolation from the 3rd party component.
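Here is a rough sketch, in C#, of how that could look. Everything in it is hypothetical: IScale, VendorScaleAdapter, and the vendor calls (GetWeight, Zero) are invented stand-ins for whatever the real API exposes. The shape is what matters: the application depends on the interface, the adapter wraps the vendor component, and the contract test drives the adapter through the operations your application actually needs.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical interface the rest of the application will eventually depend on.
public interface IScale
{
    decimal ReadWeightInGrams();
    void Tare();
}

// Thin adapter around the 3rd party component.
// VendorScale, GetWeight, and Zero are invented stand-ins for the real vendor API.
public class VendorScaleAdapter : IScale
{
    private readonly VendorScale _scale = new VendorScale();

    public decimal ReadWeightInGrams()
    {
        return _scale.GetWeight();
    }

    public void Tare()
    {
        _scale.Zero();
    }
}

// Integration contract test: drive the real component through the same
// operations the application needs, using the Visual Studio test framework.
[TestClass]
public class VendorScaleContractTests
{
    [TestMethod]
    public void Tare_ThenRead_ReturnsZero()
    {
        IScale scale = new VendorScaleAdapter();

        scale.Tare();

        Assert.AreEqual(0m, scale.ReadWeightInGrams());
    }
}

When the vendor ships a new version, a failing contract test tells you quickly that your system will need to change; and once the refactoring happens, the application can be unit tested against a fake IScale while this test keeps guarding the upgrade path.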
I'd refer to Michael Feathers' Working Effectively with Legacy Code for information on ways to add unit tests to code which is not well factored for unit testing.
Testing the 3rd party component the way you are doing it is certainly not against best practices.
Such a test would, however, be classified as a (sub-)system test, since a) the 3rd party component is tested as an isolated (sub-)system, and b) your testing goal is to validate the behaviour at the API level rather than to test lower-level implementation aspects.
The test would definitely not be classified as an integration test, because you are simply not testing the component together with your code. That is, you will not find out if, for example, your code uses the 3rd party component in a way that violates that component's expectations.
That said, I would like to make two points:
The fact that a test is not a unit-test does not make it less valuable. I have encountered situations where I told people that their tests were not unit-tests, and they got angry at me because they thought I wanted to tell them that their tests did not make sense - an unfortunate misunderstanding.
To what category a test belongs is not defined by technicalities like which testing framework you are using. It is rather defined by the goals you want to achieve with the test, for example, which types of errors you want to find.
As the title says, we use Pega extensively, and I was wondering whether it is possible to implement TDD in the same fashion as in .NET or Java.
It depends on the version of the Pega platform you are using.
Prior to Pega 7.2.2, test cases were created by running a Rule and recording the Clipboard state before and after the run. The recorded initial state was used to set up the environment for every test case run, and the recorded final state was taken as the expected reference state after each run. There was no convenient way to configure this.
As such, it was impossible to implement TDD using the built-in Pega test case capabilities, because you had to implement your Rule completely before creating a test case for it.
In Pega 7.2.2 you can manage the way the environment is set up for a test case run and the assertions that are made. But be aware that Pega test cases still lack rule dependency isolation, so you cannot test a Rule in isolation.
We are using Pega extensively as well, so given the aforementioned restrictions we decided to create our own testing framework for Pega.
I've described the problem of unit testing Pega applications in more detail in the following article:
https://www.linkedin.com/pulse/gaining-confidence-comprehensive-continuous-pega-7-unit-lutay
I am trying to optimize the current Automation testing we use for our application. We currently use a combination of Selenium and Cucumber.
Right now the layers we use are:
TEST CASE -> SELENIUM -> BROWSER.
I have seen recommendations that it's better to use TEST CASE -> FRAMEWORK -> SELENIUM -> BROWSER, so that when changes happen in the UI you only need to update the framework and not each test case.
The question: our scripts are currently broken up into individual steps, so when changes to the UI happen we only update a script or two. Is it better to keep this approach, where several scripts execute for each test case, or go to the framework approach, where the classes, methods, etc. reside in the framework and the test cases just call those methods with parameters for each step?
It depends on:
the life cycle of your testing project - a project with a long life cycle is more worth developing a framework for than a short one.
how often you need to update your test cases (which in turn depends on how often the web pages under test change) - a volatile web page will demand that its test scripts be updated more regularly. Having a framework improves maintainability (that is, if the framework is well written).
Introducing a framework has the following pros and cons:
pros: easier maintenance - you no longer need to modify your code across multiple test cases, which will save you effort and time. And you get to re-use your framework for future projects, which will save you time and effort in the long run (see the sketch after this list).
cons: it adds development overhead - extra money and effort are required. If the project is small and short, the effort and money you spend on introducing a framework may even outweigh its benefits.
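If you do go the framework route, the Page Object pattern is the usual way to structure that middle layer. Below is a minimal sketch using Selenium's C# bindings (shown in C# only for consistency with the rest of this page; the same shape works in Java). The page URL, element IDs, and method names are invented for illustration.

using OpenQA.Selenium;

// Framework layer: one class per page, hiding locators and Selenium calls.
// URL, element IDs, and method names are hypothetical.
public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver)
    {
        _driver = driver;
    }

    public void Open()
    {
        _driver.Navigate().GoToUrl("https://example.com/login");
    }

    public void LogInAs(string user, string password)
    {
        _driver.FindElement(By.Id("username")).SendKeys(user);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("login-button")).Click();
    }
}

// Test-case layer: each step just calls the framework with parameters.
// If the login page markup changes, only LoginPage needs updating.
public class LoginSteps
{
    private readonly IWebDriver _driver;

    public LoginSteps(IWebDriver driver)
    {
        _driver = driver;
    }

    public void UserLogsIn(string user, string password)
    {
        var page = new LoginPage(_driver);
        page.Open();
        page.LogInAs(user, password);
    }
}

Your Cucumber step definitions can then call methods like UserLogsIn with parameters, so a UI change is absorbed by LoginPage instead of rippling through every script.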
Let's say I have three projects in my solution.
1. An ASP.NET project simply printing an output
2. A PHP project using VS.PHP which simply prints an output (the same output as the ASP.NET project, just in a different environment)
3. A C# console project which uses the above two projects as servers and parses their responses.
Now I want to add another project named "Test" and fill it with unit tests, mainly to test the integrity of the solution.
I am new to unit tests, but my main problem here is not about them. It is this simple question: how can I run the first two projects (using the VS.Php web server for PHP and IIS Express for the ASP.NET project - one at a time) before performing my tests? I can't test the 3rd project without having one of the first two active, and as a result I can't check the integrity of my project. Not even parts of it.
So, do you have any suggestion? Am I wrong about something here? Maybe I just don't understand something.
Using Visual Studio 2013 Update 3
Usually, for unit testing, you don't connect live systems together with your tests; that would be called integration testing instead. The rule I usually apply to unit tests is that they need to a) always be fast, and b) be runnable without network connectivity.
If you want to do unit testing, the easiest way is to put interfaces around your dependent systems. Don't use these exact names, but something like IAspNetProject and IPhpProject. Code against those interfaces and then replace their implementations with fakes that return canned data for unit testing.
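To illustrate that, here is a minimal C# sketch. Only the interface name comes from the suggestion above; the member, the URL, and the canned value are invented. The console project codes against IPhpProject (an IAspNetProject would look the same), the real implementation talks to the running site, and the unit test swaps in a fake so no server has to be started.

using System.Net;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical interface around the PHP project.
public interface IPhpProject
{
    string GetOutput();
}

// Real implementation: calls the running VS.Php site over HTTP (URL is assumed).
public class PhpProjectClient : IPhpProject
{
    public string GetOutput()
    {
        using (var client = new WebClient())
        {
            return client.DownloadString("http://localhost:8080/");
        }
    }
}

// Fake used by unit tests: no server needs to be running.
public class FakePhpProject : IPhpProject
{
    public string GetOutput()
    {
        return "expected output";
    }
}

[TestClass]
public class ConsoleProjectTests
{
    [TestMethod]
    public void Fake_Server_Returns_Canned_Output()
    {
        IPhpProject server = new FakePhpProject();

        // In a real test this canned response would be fed into the console
        // project's parsing code rather than asserted on directly.
        Assert.AreEqual("expected output", server.GetOutput());
    }
}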
If you want to do integration testing, then you can use something like http://nancyfx.org/ to create a self-hosted web project. There are tons of other options for starting a lightweight web app locally to do testing against.
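As a rough sketch of that self-hosted option, assuming the Nancy 1.x self-hosting API that was current around Visual Studio 2013 (check the Nancy documentation for the exact API in your version); the module, route, port, and output value are all invented:

using System;
using Nancy;
using Nancy.Hosting.Self;

// A stand-in module that mimics the output of the two server projects.
public class FakeServerModule : NancyModule
{
    public FakeServerModule()
    {
        Get["/"] = _ => "expected output"; // Nancy 1.x route syntax
    }
}

public static class SelfHostExample
{
    public static void Run()
    {
        // Start a lightweight local web app before the integration tests
        // (for example from a test class's setup/teardown), then stop it.
        var uri = new Uri("http://localhost:8888"); // port is arbitrary
        using (var host = new NancyHost(uri))
        {
            host.Start();
            // ... point the console project at http://localhost:8888 and exercise it ...
            host.Stop();
        }
    }
}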
While the refactoring step of test-driven development should always involve another full run of the tests for the given functionality, what is your approach to preventing possible regressions beyond that functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is that what TDD recommends?
Thank you.
While the refactoring step of test-driven development should always involve another full run of the tests for the given functionality, what is your approach to preventing possible regressions beyond that functionality itself?
When you are working on a specific feature, it is enough to run the tests for that functionality only. There is no need to do a full regression run.
My professional experience makes me want to retest the whole functional module after any code change.
You do not need to do a full regression run, but you can, since unit tests are small, simple, and fast.
Also, there are several tools used for "Continuous Testing" in different languages:
in Ruby (e.g. Watchr)
in PHP (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on a Continuous Integration (CI) server is essential, especially when you have lots of integration tests.
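One practical way to combine the two ideas (fast local runs, full runs on CI) is to tag the slower tests with a category. The sketch below uses MSTest's TestCategory attribute; the class, test names, and the filter shown in the comment are illustrative.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Tagging slower tests lets the quick suite run constantly on your machine
// while the CI server runs everything, integration tests included.
[TestClass]
public class OrderTests
{
    [TestMethod] // fast unit test: part of every local run
    public void Total_Is_Sum_Of_Line_Items()
    {
        Assert.AreEqual(30, 10 + 20);
    }

    [TestMethod]
    [TestCategory("Integration")] // slower test: typically left to the CI run
    public void Order_Is_Persisted_To_Database()
    {
        // Excluded from the quick local run with a filter such as
        // vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory!=Integration"
        Assert.Inconclusive("Placeholder for a real integration test.");
    }
}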
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That's how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, sometimes performance tests as well).
Continuous Integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will have all your tests and run them automatically whenever you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should always be 100%).
I just upgraded to Xcode 4 and I was wondering whether I need to 'include unit tests' when setting up an application. Also, what does that mean exactly?
You do not need to include unit tests.
What does "unit testing" mean? (from the unit-testing FAQ)
Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers.
Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. (Wikipedia)
Unit testing is closely related to Test Driven Development.
@ToddH points out:
It's easier to include [unit tests] when you setup the project. If you do it later there are quite a few steps involved in doing it correctly: http://twobitlabs.com/2011/06/...
Thanks for the protip, Todd!