I'm using Visual Studio 2010's code coverage feature. I've made several unit tests to test a method, but the code coverage is telling me that three blocks are not getting completely covered. The problem is, I don't see how these blocks can be only partially covered. Notice that the return statements ARE covered, so clearly the branch has been taken. Any ideas?
The answer turned out to be that endDate is nullable. Even though I handle the null case at the top of the method, the coverage tool wants to see the null path exercised at every branch that uses endDate.
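For illustration, a minimal sketch of the kind of code that triggers this (the names Subscription, endDate, and IsExpired are hypothetical, not from the original project):

using System;

public class Subscription
{
    public DateTime? endDate;   // nullable: every comparison hides an extra branch

    public bool IsExpired(DateTime now)
    {
        if (endDate == null)
            return false;       // the null case handled "at the top"

        // The lifted comparison below still compiles to a HasValue check plus
        // the actual comparison, so the coverage tool sees two paths here.
        // Since the guard above makes the null path unreachable at this point,
        // the block is reported as only partially covered.
        return endDate < now;
    }
}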
I am a beginner with SonarQube and have tried to google and read a lot of community pages to understand which functions SonarQube offers.
What I don't get is: what does the test coverage in SonarQube refer to?
If it says, for example, that the coverage on New Code is 30%, what does "new code" mean?
And when does SonarQube say that an issue is a bug? Is the analyzed code compared to a certain standard in order for SonarQube to say that there is a bug?
I hope someone with more knowledge about SonarQube can help me understand it. Thank you very much.
Test coverage (also known as code coverage) corresponds to the proportion of the application code (i.e., code without test and sample code) that is executed by test cases out of all application code of the code base.
SonarQube does not compute code coverage itself. Instead, coverage is computed and uploaded by external code coverage tools (e.g., Cobertura, JaCoCo). SonarQube presents code coverage at different levels (e.g., line coverage, condition coverage); see https://docs.sonarqube.org/latest/user-guide/metric-definitions/#header-9.
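As a rough illustration of the difference between those levels (a hypothetical C# sketch, not tied to any particular coverage tool):

public static class Library
{
    public static bool CanCheckOut(bool hasLicense, bool isAdmin)
    {
        // One executable line, but two boolean conditions.
        return hasLicense || isAdmin;
    }
}

Two tests, CanCheckOut(true, false) and CanCheckOut(false, true), execute every line (100% line coverage), yet isAdmin is never evaluated to false because of short-circuiting, so condition coverage stays below 100%.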
Coverage on new code refers to the proportion of code that is both covered and added (or modified) since a certain baseline out of all added and changed code since the same baseline. The baseline can, for example, be the previously analyzed code state or the code state of the previous commit. That is, this metric expresses how extensively changes have been tested. Note that 100% coverage does not mean that code has been perfectly tested; it just says that all code has been executed by test cases.
Issues in SonarQube do not necessarily represent bugs. Usually most issues are actually not bugs but are problems affecting code maintainability in the long term (e.g., code duplications) or violations of best practices. Still, some issues can represent bugs (e.g., potential null-dereferences, incorrect concurrency handling).
Note that an issue can also be a false positive and therefore not be a problem at all.
Most issues are identified with static code analysis by searching the code structure for certain patterns. Some can be uncovered by simple code searches (e.g., violation of naming conventions). Other analyses / issue classes may additionally need data-flow analyses (null-dereferences) or require byte-code information.
I want to use SonarQube on my project. The project is quite big and scanning all files takes a lot of time. Is it possible to scan only the files changed in the last commit, and get a report based only on the changed lines of code?
I want to check whether added or modified lines make the project quality worse, and I don't care about old code.
For example, if person A created a file with 9 bugs and then committed the changes, the report and quality gate should show 9 bugs. Then person B edited the same file, adding a few lines containing 2 additional bugs, and committed the changes; the report should show the last 2 bugs and the quality gate should be evaluated on the last changes (so it should consider the last 2 bugs).
I was able to narrow the scan to only the files changed in the last commit, but the report is generated based on whole files. I had an idea about cutting out only the changed lines of code, pasting them into a new file and running the Sonar scan on that file, but I'm almost sure SonarQube needs the whole context of the file.
Is it possible to somehow achieve my use case?
No, it is not possible. I have seen a lot of similar questions; these are the answers to two of them:
New Code analysis only:
G Ann Campbell:
Analysis will always include all code. Why? Why take the time to analyze all of it when only a file or two has been changed? Because any given change can have far-reaching effects. I’ll give you two examples:
I check in a change that deprecates a much-used method. Suddenly, issues about the use of deprecated code should be raised all over the project, but because I only analyzed that one file, no new issues were raised.
I modify a much-used method to return null in some cases. Suddenly all the methods that dereference the returned value without first null-checking it are at risk of NullPointerExceptions. But only the one file that I changed was analyzed, so none of those “Possible NPE” issues are raised. Worse, they won’t be raised until after each individual file happens to be touched.
And that’s why all files are included in each analysis.
I want sonar analysis on newly checkin code:
G Ann Campbell:
First, the SonarQube interface and default Quality Gate are designed to help you focus on the New Code Period. You can’t keep analysis from picking up those old issues, but you can decide to only pay attention to issues raised on newly-changed code. That means you would essentially ignore the issues on the left side of the project homepage with a white background and focus instead on the New Code values over the yellow background on the right. We call this Fixing the Leak, or alternately Clean as You Code.
Second, if you have a commercial edition, then branch and PR analysis are available to you. With Short-Lived Branch (SLB) and PR analysis, the analysis still covers all files, but all that’s reported in the UI is what’s changed in the PR / SLB.
Ideally, you’ll combine both of these things to make sure your new code stays clean.
The position on this matter has not changed over the last few years, so don't expect it to change.
I use SonarQube to do line coverage analysis, but the reported results seem illogical.
For example, for the if statement below:
if (a != null) {
    System.out.print("Hello");
}
the if condition is reported as NOT covered by unit tests, which means not executed.
BUT, the System.out.print("Hello") call inside it is reported as covered by unit tests. That is illogical, right?
This is not really a question of SonarQube but of your coverage engine. SonarQube only relays what your coverage engine reported.
That said, you're likely misinterpreting the markers in the SonarQube interface, although without a screenshot it's hard to know for certain. If you're seeing a diagonally striped marker next to the if, then SonarQube is telling you that the line is partially covered. That is, there are multiple paths through the code and only some of them are taken in your testing. Specifically, it sounds like you are testing the path where the condition is true. I would guess you're not testing the path where the condition is false.
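For illustration, here is a minimal C# analogue of covering the missing path (the names and the MSTest-style attributes are hypothetical, not taken from the question):

using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class Greeter
{
    public static string Greet(string name)
    {
        if (name != null)
            return "Hello " + name;   // the path the existing tests already exercise
        return "Hello stranger";      // the untested path that leaves the if partially covered
    }
}

[TestClass]
public class GreeterTests
{
    [TestMethod]
    public void Greet_UsesFallback_WhenNameIsNull()
    {
        // Exercising the false branch turns the partial-coverage marker into full coverage.
        Assert.AreEqual("Hello stranger", Greeter.Greet(null));
    }
}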
That's basically the idea: I own a project and I want to break any new build on TeamCity based on a code coverage percentage. Put simply, the percentage can never go down. This way I ensure that new commits are covered.
TeamCity provides this out of the box. Simply go to the configuration for the project, and click 'Failure Conditions'. This gives you a place where you can add a failure condition on a metric change. One of the available metric changes is 'Percentage of line coverage'. You can set it so that the build fails if the difference from the last build is less than 0.
Beware when adding this, though, especially on projects where code coverage is not already 100%: a refactoring that removes lines which all happened to be covered by tests will lower the overall coverage percentage and fail the build, despite not adding any new functionality.
Converting my current code project to TDD, I've noticed something.
class Foo {
    public event EventHandler Test;
    public void SomeFunction() {
        //snip...
        Test(this, new EventArgs());
    }
}
There are two dangers I can see when testing this code and relying on a code coverage tool to determine if you have enough tests.
You should be testing if the Test event gets fired. Code coverage tools alone won't tell you if you forget this.
I'll get to the other in a second.
To this end, I added an event handler to my startup function so that it looked like this:
Foo test;
int eventCount;

[Startup] public void Init() {
    test = new Foo();
    // snip...
    eventCount = 0;
    test.Test += MyHandler;
}

void MyHandler(object sender, EventArgs e) { eventCount++; }
Now I can simply check eventCount to see how many times my event was called, if it was called. Pretty neat. Only now we've let through an insidious little bug that will never be caught by any test: namely, SomeFunction() doesn't check if the event has any handlers before trying to call it. This will cause a null dereference, which will never be caught by any of our tests because they all have an event handler attached by default. But again, a code coverage tool will still report full coverage.
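To make that concrete, here is a sketch of the kind of test that would catch the bug and of one conventional fix (the test name and the NUnit-style attribute are illustrative, not from the original suite):

using System;

class Foo {
    public event EventHandler Test;

    public void SomeFunction() {
        //snip...
        // Copy the delegate before invoking so the raise is safe when nobody
        // has subscribed (the classic pre-C# 6 pattern; on newer compilers
        // Test?.Invoke(this, EventArgs.Empty) does the same thing).
        EventHandler handler = Test;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

// In the same fixture as Init(), a test that deliberately attaches no handler
// would have exposed the null dereference that the shared setup hides:
[Test] public void SomeFunction_DoesNotThrow_WhenNoHandlerIsAttached() {
    new Foo().SomeFunction();
}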
This is just my "real world example" at hand, but it occurs to me that plenty more of these sorts of errors can slip through, even with 100% 'coverage' of your code, this still doesn't translate to 100% tested. Should we take the coverage reported by such a tool with a grain of salt when writing tests? Are there other sorts of tools that would catch these holes?
I wouldn't say "take it with a grain of salt" (there is a lot of utility to code coverage), but to quote myself
TDD and code coverage are not a panacea:
· Even with 100% block coverage, there will still be errors in the conditions that choose which blocks to execute.
· Even with 100% block coverage + 100% arc coverage, there will still be errors in straight-line code.
· Even with 100% block coverage + 100% arc coverage + 100% error-free-for-at-least-one-path straight-line code, there will still be input data that executes paths/loops in ways that exhibit more bugs.
(from here)
While there may be some tools that can offer improvement, I think the higher-order bit is that code coverage is only part of an overall testing strategy to ensure product quality.
<100% code coverage is bad, but it doesn't follow that 100% code coverage is good. It's a necessary but not sufficient condition, and should be treated as such.
Also note that there's a difference between code coverage and path coverage:
void bar(Foo f) {
    if (f.isGreen()) accountForGreenness();
    if (f.isBig()) accountForBigness();
    finishBar(f);
}
If you pass a big, green Foo into that code as a test case, you get 100% code coverage. But for all you know, a big, red Foo would crash the system because accountForBigness incorrectly assumes that some pointer is non-null, and that pointer is only made non-null by accountForGreenness. You didn't have 100% path coverage, because you didn't cover the path which skips the call to accountForGreenness but not the call to accountForBigness.
It's also possible to get 100% branch coverage without 100% path coverage. In the above code, one call with a big, green Foo and one with a small, red Foo gives the former but still doesn't catch the big, red bug.
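To spell out the path count (an illustrative enumeration, not part of the original answer): the two ifs are independent, so there are 2 × 2 = 4 paths through bar, and two tests can hit at most two of them.

// Illustrative C# enumeration of the four paths; the tuple names are just labels.
var pathsThroughBar = new (bool isGreen, bool isBig)[]
{
    (true,  true),    // covered by the big, green test
    (false, false),   // covered by the small, red test
    (true,  false),   // not covered
    (false, true),    // not covered: the big, red crash case
};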
Not that this example is the best OO design ever, but it is rare to see code where code coverage implies path coverage. And even if it does in your own code, it says nothing about coverage of all the code and paths in the libraries or system that your program could possibly use. To do that you would in principle need 100% coverage of all the possible states of your program (and hence make sure, for example, that you never call with invalid parameters that lead to error-handling code in a library or the system that is not otherwise reached), which is generally infeasible.
Should we take the coverage reported by such a tool with a grain of salt when writing tests?
Absolutely. The coverage tool only tells you what proportion of lines in your code were actually run during tests. It doesn't say anything about how thoroughly those lines were tested. Some lines of code need to be tested only once or twice, but some need to be tested over a wide range of inputs. Coverage tools can't tell the difference.
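A small hypothetical example of that difference: one test gives this method 100% line coverage, yet it is still wrong for part of its input range.

public static int Midpoint(int low, int high)
{
    // Fully covered by a single test such as Midpoint(2, 4) == 3, but the
    // addition silently overflows for large inputs like
    // Midpoint(int.MaxValue, int.MaxValue - 2). Coverage cannot tell
    // "executed once" apart from "tested across its input range".
    return (low + high) / 2;
}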
Also, 100% test coverage as such does not mean much if the test driver just exercises the code without meaningful assertions about the correctness of the results.
Coverage is only really useful for identifying code that hasn't been tested at all. It doesn't tell you much about code that has been covered.
Yes, this is the primary difference between "line coverage" and "path coverage". In practice, you can't really measure path coverage. Like compile-time checks, unit tests, and static analysis, line coverage is just one more tool to use in your quest for quality code.
Testing is absolutely necessary. What must be consistent, too, is the implementation.
If you implement something in a way that has not been covered by your tests, that is where a problem may happen.
Problems may also happen when the data you test against is not representative of the data that will flow through your application.
So yes, code coverage is necessary. But not as much as real testing performed by real people.