Xcode 4 - 'Include Unit Tests'

I just upgraded to Xcode 4 and I was wondering if I need to 'include unit tests' when setting up an application? Also, what does that mean exactly?

You do not need to include unit tests.
What does "unit testing" mean? (from the unit-testing FAQ)
Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers.
Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. (Wikipedia)
Unit testing is closely related to Test Driven Development.
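To make the definition concrete, here is a minimal, purely illustrative sketch in Java with JUnit 5 (hypothetical names; the same idea applies to the unit test target Xcode can add for you): the "unit" is a single small function, and the test exercises it in isolation and asserts on the result.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // A hypothetical "unit": one small, pure function.
    class PriceCalculator {
        static int totalCents(int unitCents, int quantity) {
            return unitCents * quantity;
        }
    }

    // The unit test exercises that single function in isolation
    // and asserts that it behaves as intended.
    class PriceCalculatorTest {
        @Test
        void multipliesUnitPriceByQuantity() {
            assertEquals(600, PriceCalculator.totalCents(200, 3));
        }
    }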
@ToddH points out:
It's easier to include [unit tests] when you setup the project. If you do it later there are quite a few steps involved in doing it correctly: http://twobitlabs.com/2011/06/...
Thanks for the protip, Todd!

Related

Why are smoke tests useful with Continuous Integration?

We usually do smoke tests to check critical functionality whenever we receive a new build. After executing the smoke tests, we are confident enough to move on to the next stage (the next level of testing). I heard from my colleagues that smoke tests are really useful when your team employs Continuous Integration and DevOps. Smoke tests are always beneficial, but how are they more beneficial in combination with CI and DevOps?
Testing is interesting and poses a new challenge for QA every time, requiring a higher level of effort ahead of the final deployment of the product. This includes continuous delivery within a continuous integration environment. A continuous deployment process requires testing to proceed in parallel in order to keep the process moving.
I've usually heard smoke testing used to refer to manual testing that you run to sanity-check builds. This article defines smoke testing as follows:
Smoke Testing, also known as “Build Verification Testing”, is a type of software testing that comprises a non-exhaustive set of tests that aim at ensuring that the most important functions work. The results of this testing are used to decide if a build is stable enough to proceed with further testing.
First, I would certainly hope that people are doing this whenever they check code into the main branch to ensure that their changes didn't break the software in some obvious way. That holds whether you're doing continuous integration or not. (One of my personal pet peeves has always been people who check in code and then leave for the day without checking to make sure that it worked).
Also, keep in mind that in a typical CI cycle nowadays a build will often occur with every checkin to the main branch (or, at a minimum, there will be a nightly automated build; at my current company we have both), so you don't really have time to manually run your entire test suite for every build. One of the main purposes of CI is to have integration (and, as an extension, builds) occur much more frequently than is typical in other kinds of development cycles.
As one final comment: if you're doing continuous integration, I'd strongly encourage you to have some kind of automated testing (e.g. coded UI tests, unit tests, etc.) as part of that. Those can provide basic smoke/sanity testing and regression testing and reduce the burden of having to do all of it manually for every build.
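As a hedged sketch of what that automation can look like, assuming JUnit 5 and hypothetical names: a handful of critical-path tests can be tagged as smoke tests so the CI job runs only that subset on every commit (for example via Maven Surefire's groups parameter) and the full suite nightly.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertNotNull;

    // Hypothetical smoke test: the "smoke" tag lets the CI build run
    // just this subset on every commit (e.g. mvn test -Dgroups=smoke).
    @Tag("smoke")
    class ApplicationSmokeTest {

        @Test
        void criticalPathIsReachable() {
            // Stand-in check; a real project would boot the application
            // or hit its health endpoint here.
            assertNotNull(System.getProperty("java.version"));
        }
    }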

SonarQube with small/medium/large test organization

I am trying to apply SonarQube analysis to our system. The system is roughly laid out in multiple modules. We have some small and medium tests in each module - and hope to create large tests in the future. We are trying to use "Google" test naming.
SonarQube seems to refer to unit and integration tests (roughly equivalent to small and medium in our environment). I'm wondering if anyone knows a simple way to modify the labeling to better match what we are trying to set up.
It is not possible to change the labels in SonarQube. Unit and integration tests are two very common types of tests; IMO you should stick to this convention.
Just to share some information: at SonarSource, we have unit tests, medium tests and integration tests, and when we analyse our code on our internal SonarQube instance, the medium tests end up in the "unit test" category (they are executed at the same time BTW).
I was able to switch the labels Unit -> Small and Integration -> Medium by creating a language pack plugin. I started from the French language pack and modified the existing core.properties file. The solution gives me a "localized" version of the site using our naming convention.

What does xUnit compatibility entail?

I was researching unit testing frameworks and the Wikipedia list has a column which lists whether a framework is considered the "xUnit type" or "compatible". Mocha was listed as not being of the "xUnit type" – why? What are the core features of the xUnit family?
xUnit frameworks share the following concepts:
Test runner - the program that runs the tests
Test case - the base class from which all tests inherit
Test fixture - the state needed to run the tests
Test suites - tests that share the same fixture
Assertion - the function that verifies the state of the test
Test formatter - shows the results in one or more formats. This is a bit tricky, since the formats are not always the same. For example, Inria's page specifies the XML tags as test-case and test-suite. JUnit, on the other hand, uses testcase and testsuite.
You can think of xUnit frameworks as *Unit... where the * is replaced by the language (e.g., JUnit for Java).
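A hedged JUnit (Java) sketch, with hypothetical names, of how those concepts line up: the test runner (your IDE or build tool) discovers the test case class, the fixture is prepared before each test, and an assertion checks the outcome; the formatter then reports the results, e.g. as testsuite/testcase XML.

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Test case: the class the test runner discovers and executes.
    class StackTest {

        // Test fixture: the state every test in this class starts from.
        private java.util.Deque<String> stack;

        @BeforeEach
        void setUp() {
            stack = new java.util.ArrayDeque<>();
        }

        @Test
        void newStackIsEmpty() {
            // Assertion: verifies the state the test expects.
            assertTrue(stack.isEmpty());
        }
    }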
What's very tricky is that xUnit.net is different from the generic xUnit family: xUnit.net is a framework in its own right that also incorporates the aforementioned concepts. Its XML output format uses different tags though, such as assembly, class, etc. It can get very confusing when googling for issues.

Can I use Unit Testing tools for Integration Testing?

I'm preparing to create my first Unit Test, or at least that's how I was thinking of it. After reading up on unit testing this weekend I suspect I'm actually wanting to do Integration Testing. I have a black box component from a 3rd party vendor (e.g. a digital scale API) and I want to create tests to test its usage in my application. My goal is to determine if a newly released version of said component is working correctly when integrated into my application.
The use of this component is buried deep in my application's code and the methods that utilize it would be very difficult to unit test without extensive refactoring which I can't do at this time. I plan to, eventually.
Considering this fact, I was planning to write custom Unit Tests (i.e. not derived from one of my classes' methods or properties) to put this 3rd party component through the same operations that my application will require from it. I do suspect that I'm circumventing a significant benefit of Unit Testing by doing it this way, but as I said earlier, I can't stop and refactor this particular part of my application at this time.
I'm left wondering if I can still write Unit Tests (using Visual Studio) to test this component or is that going against best practices? From my reading it seems that the Unit Testing tools in Visual Studio are very much designed to do just that - unit test methods and properties of a component.
I'm going in circles in my head; I can't determine whether what I want is a Unit Test (of the 3rd party component) or an Integration Test. I'm drawn to Unit Tests because they offer a managed system for executing tests, but I don't know if they are appropriate for what I'm trying to do.
Your plan of putting tests around the 3rd party component, to prove that it does what you think it does (what the rest of your system needs it to do) is a good idea. This way when you upgrade the component you can tell quickly if it has changed in ways that mean your system will need to change. This would be an Integration Contract Test between that component and the rest of your system.
Going forward it would behoove you to put that 3rd party component behind an interface upon which the other components of your system depend. Then those other parts can be tested in isolation from the 3rd party component.
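As a hedged sketch of that shape (shown in Java with made-up names purely for illustration; the same pattern applies in .NET), the vendor API is hidden behind an interface, a thin adapter is the only code that touches it, and an integration contract test pins down the behaviour your system relies on:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Stand-in for the vendor's black-box API; in reality this is the
    // 3rd party class, not something you write yourself.
    class VendorScale {
        double currentWeight() {
            return 42.0;
        }
    }

    // The abstraction the rest of the system depends on.
    interface Scale {
        double readWeightInGrams();
    }

    // Thin adapter: the only place that touches the vendor component.
    class VendorScaleAdapter implements Scale {
        private final VendorScale vendor = new VendorScale();

        @Override
        public double readWeightInGrams() {
            return vendor.currentWeight();
        }
    }

    // Integration contract test: rerun it against each new vendor release
    // to see whether the behaviour your system relies on has changed.
    class ScaleContractTest {
        @Test
        void reportsANonNegativeWeight() {
            Scale scale = new VendorScaleAdapter();
            assertTrue(scale.readWeightInGrams() >= 0.0);
        }
    }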
I'd refer to Michael Feathers' Working Effectively with Legacy Code for information on ways to go about adding unit tests to code which is not factored well for unit tests.
Testing the 3rd party component the way you are doing it is certainly not against best practices.
Such a test would, however, be classified as a (sub-)system test, since a) the 3rd party component is tested as an isolated (sub-)system, and b) your testing goal is to validate the behaviour at the API level rather than to test lower-level implementation aspects.
The test would definitely not be classified as an integration test, because you are simply not testing the component together with your code. That is, you will for example not find out if your component uses the 3rd party component in a way that violates the expectations of the 3rd party component.
That said, I would like to make two points:
The fact that a test is not a unit-test does not make it less valuable. I have encountered situations where I told people that their tests were not unit-tests, and they got angry at me because they thought I wanted to tell them that their tests did not make sense - an unfortunate misunderstanding.
To what category a test belongs is not defined by technicalities like which testing framework you are using. It is rather defined by the goals you want to achieve with the test, for example, which types of errors you want to find.

TDD: refactoring and global regressions

While the refactoring step of test-driven development should always involve another full run of the tests for the given functionality, what is your approach to preventing possible regressions beyond that functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is that what TDD recommends?
Thank you.
While the refactoring step of test-driven development should always involve another full run of the tests for the given functionality, what is your approach to preventing possible regressions beyond that functionality itself?
When you are working on a specific feature, it is enough to run the tests for the given functionality only. There is no need to do a full regression.
My professional experience makes me want to retest the whole functional module after any code change.
You do not need to do a full regression, but you can, since unit tests are small, simple, and fast.
Also, there are several tools that are used for "Continuous Testing" in different languages:
in Ruby (e.g. Watchr)
in PHP (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on a Continuous Integration (CI) server is essential, especially when you have lots of integration tests.
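One hedged way to organize that split, assuming JUnit 5 and a hypothetical package name: developers run the tests for the code they are changing during the TDD loop, while a suite like the one below (or simply the build tool's default test goal) runs the whole test base on the CI server for every commit.

    import org.junit.platform.suite.api.SelectPackages;
    import org.junit.platform.suite.api.Suite;

    // Hypothetical full-regression suite: selects every test under the
    // application's root test package so CI can run the entire test base.
    @Suite
    @SelectPackages("com.example.app")
    class FullRegressionSuite {
        // No body needed; the annotations tell the JUnit Platform what to run.
    }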
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That's how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, and sometimes performance tests as well).
Continuous Integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will have all your tests and run them automatically when you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should always be 100%).
