Is there a way to find coverage of UI test cases in Xcode?

I know how to check the coverage of unit test cases: we can see coverage for each .swift file in the Xcode coverage report. But what about UI test cases?
As I understand it, in unit tests the subject being tested is a .swift file. If a file has a class with 4 methods/functions, the unit test coverage of that file is 100% only if all 4 methods are called from unit tests.
In UI tests the subject is the view. Does interacting with all the UI elements lead to 100% coverage? How does the coverage report for UI tests work?
Edit:
In unit tests - when a few lines of a function are not covered I see a red overlay; there I know that I have to write unit tests for the screenshot class method (line 56 in the attached image). Is there any similar mechanism in UI tests?
In UI tests - how can we find which UI elements are left uncovered?

The code coverage report can be generated for both unit and UI tests in Xcode. In your test scheme, choose Gather coverage for the required targets. You can only get coverage for targets in your workspace.
Coverage for UI tests is collected the same way as for unit tests.
Even if you interact with all UI elements in your app, some code might be uncovered by tests. If you aim to increase your coverage, add additional tests to execute previously uncovered code.
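If you prefer the command line, here is a minimal sketch of the same workflow using xcodebuild and xccov in newer Xcode versions (the scheme name, simulator destination, and result bundle path below are placeholders, not values from the question):

# Run the tests (UI tests included) with coverage enabled;
# -enableCodeCoverage YES matches "Gather coverage" in the scheme editor.
xcodebuild test \
  -scheme MyApp \
  -destination 'platform=iOS Simulator,name=iPhone 15' \
  -enableCodeCoverage YES \
  -resultBundlePath TestResults.xcresult

# Print the per-target and per-file coverage report from the result bundle.
xcrun xccov view --report TestResults.xcresult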

Related

How can I check the coverage of changed code?

I have a Jenkins job that runs multiple jobs; some of those are unit tests for different parts of our platform.
One of those jobs is phpunitTest, which basically makes sure that all tests pass and generates a code coverage report using Codecept.
My question now is, how can I make sure new code pushed is covered by the unit tests?
Currently I'm using this command to run the coverage:
codeception/codeception run unit --coverage-html --quiet
I expect the build to fail if the pushed code isn't covered by unit tests.
Unless Codecept has special (and unusual) tooling for this, there are basically two ways: achieve 100% coverage and verify that at every run, or force a move towards 100% coverage. Since most projects don't even go for 100% coverage (which is not at all the same as having covered all your bases; see for example SQLite for why 100% is just the beginning), I'll assume the latter. What you can do in that situation is to
enforce that the coverage percentage minimum is met at every CI run and
enforce that the coverage percentage is never lowered.
By these simple expedients you'll naturally ensure that code coverage goes up with every piece of code added.
This does not guarantee that each new piece of code is 100% covered; for that you would have to parse the coverage checker results and see if any new or changed files are mentioned as missing coverage.
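A crude shell sketch of both rules, as a Jenkins build step. It assumes Codeception's --coverage-xml option writes a Clover-style report to tests/_output/coverage.xml and that a coverage-minimum.txt file holding the last known percentage is committed to the repository; the paths and the grep-based extraction are assumptions to adapt:

# Generate the XML coverage report alongside the test run.
vendor/bin/codecept run unit --coverage-xml --quiet

# Crudely pull the project-level statement counts out of the Clover XML
# (assuming the project-wide <metrics> element is the last one in the file).
covered=$(grep -o 'coveredstatements="[0-9]\+"' tests/_output/coverage.xml | tail -1 | grep -o '[0-9]\+')
total=$(grep -o ' statements="[0-9]\+"' tests/_output/coverage.xml | tail -1 | grep -o '[0-9]\+')
percent=$(( 100 * covered / total ))

minimum=$(cat coverage-minimum.txt)      # the bar set by previous runs
if [ "$percent" -lt "$minimum" ]; then
  echo "Coverage ${percent}% is below the required ${minimum}%" >&2
  exit 1                                 # fail the Jenkins build
fi
echo "$percent" > coverage-minimum.txt   # ratchet the bar up, never down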

How to show Integration Test Statistics?

I'm using SonarQube v6.4. I'm aware that all types of tests (unit tests, integration tests, etc.) have been merged together as overall coverage.
However, on the interface I can see statistics for unit tests only. Is there a way to get the statistics for other types of tests?
Examples of statistics available only for unit tests:
Unit Test Errors
Unit Test Failures
Skipped Unit Tests
Unit Test Success (%)
Unit Test Duration
SonarQube no longer distinguishes between different types of tests. Integration tests, Smoke Tests, Medium Tests, Regression Tests, etc. - all are now called "Unit Tests". This new naming is indeed misleading...
To see the values, navigate to your project, click on the "Measures" tab ("All" page) and scroll down to "Coverage". There you will find the current test measure values.
Starting from version 6.6 of SonarQube you will be able to show graphs for any metric (see SonarQube's own SonarQube instance with 6.6-SNAPSHOT installed).
Navigate to any SonarQube project, click on the tab "Activity", select "Custom" from the drop down and click on "Add metric". There you can choose "Unit Test Errors", "Skipped Unit Tests", etc.
I discovered that this feature is not supported in SonarQube. A ticket opened for this issue has already been closed as "Won't Fix" by the SonarQube team.
For a workaround you can check this.

Coverage metrics do not consider code without tests?

In a project where some components have test coverage and others don't, it seems that SonarQube calculates the total code coverage based only on the components that have coverage. I would expect that the lines of code in components without test coverage are classified (at least for calculation purposes) as having 0% code coverage.
Example:
Project X
Module 1: 100% coverage
Module 2: N/A coverage (I reason that this is equivalent to 0% in a computation!)
SonarQube coverage (Project X): 100%
How is the total coverage calculated? If this is by design, why?
Coverage is calculated based on the lines covered during test execution. If you have no tests, test execution is skipped, no code is executed at all, and therefore no coverage data is produced. So there is a difference between N/A and 0%.
If you add an empty test to the module, it will be used for the calculation and produce 0% coverage as a result.
EDIT:
Assuming you are using JaCoCo for coverage: JaCoCo writes the coverage information into a file (e.g. jacoco.exec). For unit tests (Maven Surefire) this file is written to the target directory of the module, and coverage is calculated for that module using that file. No file, no coverage.
When I run integration tests, I want to determine what parts of the code of the entire project are covered by the tests. My integration tests are typically located in a separate module as well, so per-module coverage doesn't make much sense in that case, because the tests (one module) and the product (all other modules) are separated.
So I run the integration tests and write the coverage information into a single jacoco-it.exec file in the target folder of the root module (project). As a result, the coverage for the entire code base is calculated, including modules with 0% coverage. (See here for how to set it up: Multi-Module Integration Test Coverage with Jacoco and Sonar)
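A sketch of what such a run can look like from the command line, assuming the jacoco-maven-plugin and Failsafe are set up as in the linked article; the aggregated file path, the append flag, and the Sonar property value are assumptions to adapt:

# Run the build with the JaCoCo agent attached, appending all modules'
# integration-test coverage into one file at the reactor root
# (jacoco.destFile and jacoco.append are prepare-agent user properties).
mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent verify \
  -Djacoco.destFile=$(pwd)/target/jacoco-it.exec \
  -Djacoco.append=true

# Point the Sonar analysis at the aggregated file
# (sonar.jacoco.itReportPath was the property used in the SonarQube 6.x era).
mvn sonar:sonar \
  -Dsonar.jacoco.itReportPath=$(pwd)/target/jacoco-it.exec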
What you lose with this approach is knowing which parts of the code of a single module are covered by the tests of that same module, because parts of a module can be covered by tests of another module; i.e. the coverage of a module can be 30% without that module having any tests.
So you have the choice:
detailed coverage per module, with test-less modules having N/A coverage
total coverage for entire project, with test-less modules having 0% coverage but without exact coverage information by module
both: using both approaches, for unit tests and integration tests separately

VS 2010 - Code Coverage Results includes the test project itself

I am writing some unit tests for one of my DLL libraries.
The 'Code Coverage Results' pane shows a breakdown of the assemblies covered and tested.
For some strange reason, my test project itself appears in the coverage results! (at approx. 90% covered)
This seems stupid... what's the deal with this?
The reason the percentage is so high is that projects are instrumented for code coverage to keep track of which lines are hit by a test run; since you are running the tests from this project, almost all lines of code in it will be run.
You can choose which projects/DLLs to collect Coverage statistics on in the Test Settings.
So if you don't need to capture stats on the test project (which you shouldn't really), you can simply remove this project from the settings you're using for coverage.
See http://msdn.microsoft.com/en-us/library/ms182534.aspx (steps 5 - 7 in particular) for more details.

Xcode 4 - 'Include Unit Tests'

I just upgraded to Xcode 4 and I was wondering whether I need to 'include unit tests' when setting up an application. Also, what does that mean exactly?
You do not need to include unit tests.
What does "unit testing" mean? (from the unit-testing FAQ)
Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers.
Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. (Wikipedia)
Unit testing is closely related to Test Driven Development.
@ToddH points out:
It's easier to include [unit tests] when you setup the project. If you do it later there are quite a few steps involved in doing it correctly: http://twobitlabs.com/2011/06/...
Thanks for the protip, Todd!
