I have a SonarQube 5.6 installation, using C/C++ plugin 3.12 to analyse our project. I have generated coverage results (gcov), but so far only for one of the application C files. The coverage is at 98.3%.
When analysing the whole project, that application's coverage results get 'imported', and I can trace them in the web interface.
On the top-level Code page, the folder containing that file then shows 98.3%, which in my view is not correct, since no coverage is available yet for any of the other C files. I've tried to show that in the following series of snapshots:
(1) Top-level Code Tree:
(2) Going down the 'Implementation' tree:
(3) Going down the 'Implementation/ComponentTemplate' tree:
(4) Going down the 'Implementation/ComponentTemplate/code' tree:
EXMPL.c has only 113 lines of code (snapshot 4). Compared to the roughly 61k total lines of code of 'Implementation' (snapshot 4), that is only about 0.2%.
The coverage of 98.3% for EXMPL.c shown in (1) is therefore wrong!
My project consists of several applications; EXMPL is one of them, the smallest. So I have to produce separate coverage results for each application and 'import' them separately into SonarQube, which means the coverage result files are located in different folders.
Maybe that project structure, or the 'incomplete import' of coverage results, is the cause of the 'wrong' coverage measures, but so far I have not found any useful information on how SonarQube handles the provided gcov coverage measures.
Any help or information will be appreciated.
Thanks
Your second guess is right: the incomplete import of coverage results is what's skewing the numbers. Lines that aren't included in the coverage report aren't included in the coverage calculations. Since the current coverage report includes only one file, which is 98.3% covered, all the numbers are based on that file alone.
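To illustrate with the numbers from the question (a sketch of the arithmetic only, not SonarQube's actual code): the folder percentage is computed over the lines that appear in the report, not over all lines in the folder.

public class CoverageMath {
    public static void main(String[] args) {
        double coveredLines  = 113 * 0.983; // covered lines of EXMPL.c
        double reportedLines = 113;         // lines with coverage data (only EXMPL.c)
        double allLines      = 61_000;      // total LOC of 'Implementation'

        // What SonarQube shows: covered / reported lines
        System.out.printf("Shown:    %.1f%%%n", 100 * coveredLines / reportedLines); // ~98.3%
        // What the questioner expected: covered / all lines
        System.out.printf("Expected: %.1f%%%n", 100 * coveredLines / allLines);      // ~0.2%
    }
}

Once coverage reports are generated for the remaining files, the folder figure will converge on the expected value.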
Related
I have a build where the Checkmarx scan takes more than four hours to scan the full source code. Is there any way to split the source code into three or four packages and scan them separately, so that we can run the scans in parallel and finish faster? If you know of one, please explain how to split the source code into different packages to send to the scan.
Currently, Checkmarx does not support linking results across separate source-code scans. If your code contains some stand-alone components, such as microservices, you can split your source code into multiple Checkmarx scans.
But if you split your code into separate scans and there is a "flow" (a value in the code passed between the split source codes) that exposes a vulnerability, Checkmarx won't recognize it.
SonarQube Metrics graphs are not being displayed on my project dashboards behind the total numbers and Quality Gate ratings.
I am running current versions of SonarQube and its plugins, with MySQL 5.7. I am creating a new SQ project through Administration->Projects->Management->Create Project and then performing analyses as follows (anything capitalized is either a variable or anonymized): *
MSBuild.SonarQube.Runner.exe begin /k:KEY /v:VERSION /d:sonar.host.url=http://localhost:9000/ /d:sonar.login=TOKEN /d:sonar.projectDate=YYYY-MM-DDTHH:MM:SS+0000
MSBuild.exe /maxcpucount /nr:false /nologo /target:rebuild /verbosity:quiet PROJECT\PROJECT.sln
MSBuild.SonarQube.Runner.exe end /d:sonar.login=TOKEN
I have tried VERSION equal to a constant value "1.0" and VERSION equal to a string of the UNIX time (seconds since 1/1/1970) of each git commit I analyze. I've also tried configuring project leak periods of the last 90 days and also previous_analysis, though I think that would only affect the graphs in the right column. If someone could tell me what I am doing incorrectly, I would appreciate it.
* These are examples of the commands executed by a Python script that iterates over a list of git commit hashes and their associated timestamps, in increasing order, to populate the project history. The Python script in turn mimics a Jenkins job that will eventually take over calling SonarQube.
Background Tasks page:
Your project homepage screenshot shows the graph in the leak period, but not extending left into the Overall section.
This is going to be a question of your analysis date and your definition of "leak period". If your leak period is set to previous_version, then you need to take a look at the sonar.projectVersion values in your analyses (set by the /v: argument above). So far, it looks like all your analyses are leak-period analyses, which is why nothing has filtered left into the overall view.
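For example (a sketch reusing the begin command from the question; the version strings are purely illustrative), with a previous_version leak period the earlier analyses only move into the Overall section once an analysis arrives with a changed version:

MSBuild.SonarQube.Runner.exe begin /k:KEY /v:1.0 /d:sonar.host.url=http://localhost:9000/ /d:sonar.login=TOKEN
(build and end steps as above; one or more analyses establish version 1.0)
MSBuild.SonarQube.Runner.exe begin /k:KEY /v:1.1 /d:sonar.host.url=http://localhost:9000/ /d:sonar.login=TOKEN
(the version has changed, so the 1.0 analyses become the baseline and fill the Overall part of the graphs)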
My project has over 150k lines of code according to Coverity Scan, while Cloc reports 30k (which is a lot more reasonable).
I am trying to figure out where those LOCs come from, but failing. How do I get Coverity Scan to report the actual lines of code, or at least to report where they come from?
By default the LOC count includes the system headers pulled in by your application. You may be able to configure component maps to filter these out if it matters enough to you.
I am using simplecov for code coverage. I have no idea what sequence coverage is. I Googled it but I could not find anything, although I did find information about Branch Coverage.
Here is what I see in Shippable CI:
The term "Sequence coverage" comes from Shippable CI, not simplecov.
From Shippable's API documentation we can find this:
branchCoveragePercent: The percentage of branches (if/then/else conditions) that are covered by tests.
sequenceCoveragePercent: The percentage of lines that there is code coverage for.
So branch coverage counts all your code branching such as:
if a==b
do stuff # branch 1
else
do other stuff # branch 2
end
Now if your test suite only tests when a==b, your branch coverage for this file is 50%.
Sequence coverage is the regular line-by-line coverage report: if your code has 100 lines and only 70 of them were run during the tests, your sequence coverage is 70%.
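The two numbers can also diverge in the other direction. A minimal sketch (in Java purely for illustration; the distinction is language-independent):

public class CoverageDemo {
    static String label(int a, int b) {
        String s = "different";
        if (a == b) s = "equal"; // a single executable line, but two branches
        return s;
    }

    public static void main(String[] args) {
        // This one call executes every line of label() -> 100% sequence coverage,
        // yet takes only the "true" side of the if    -> 50% branch coverage.
        System.out.println(label(1, 1));
    }
}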
Evidently "Sequence Coverage" is a Shippable CI term. According to Shippable CI's docs, "sequence coverage" just means line coverage. Perhaps they chose that term to contrast it with "branch coverage".
I am using EMMA to record code coverage, and I am particularly interested in the line coverage (line %). We are planning to increase the line coverage of our source code through automation: we first execute the scenarios manually and then check with EMMA whether the line % has increased. If it has, we go ahead and automate that feature. I am stuck on a particular if-else block where I see the desired result when I run the scenario manually, but EMMA is not recording the lines as covered. Here is the sample code:
if (a == null)
{
final class1 c1 = new class1();
if (c1.isSE())
{
c1.sendRedirect(req, res, "error.html");
}
else
{
c1.sendRedirect(req, res, "testpage.html");
}
return;
}
The first three lines are green in the EMMA report, but the following lines are red (meaning they are not covered):
c1.sendRedirect(req, res, "error.html");
c1.sendRedirect(req, res, "testpage.html");
return;
But when I execute the scenario manually, I see the desired result (i.e. I am redirected to the testpage.html page). Why is EMMA not recording these lines as covered?
Note: I have already tried the troubleshooting steps below (mentioned in http://emma.sourceforge.net/faq.html):
3.18. EMMA started reporting that it instrumented 0 classes even though I gave it some input...
You might be getting tripped up by the incremental nature of EMMA instrumentation. When debugging an EMMA-enabled build, try a clean recompile and/or delete all instrumentation output directories and all .em/.ec files by hand to reset EMMA to a clean state.
This may be useful for future people who refer to this question:
When you instrument the JARs, you may see EMMA listing some of the classes with "Class compiled without debug mode". If you see these messages when instrumenting, then line % coverage will not be generated for those classes. To overcome this, you either need to compile those classes in debug mode or consider excluding them if they are not required. Usually the classes with the above-mentioned message are third-party classes.
If you don't see the "Class compiled without debug mode" message while instrumenting, then you should see line coverage in your report.
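For reference, a sketch of what compiling in debug mode looks like (the source path is illustrative): javac's -g switch emits the line-number tables EMMA needs for line coverage, and the Ant equivalent is debug="true" on the <javac> task.

javac -g -d build src/com/example/MyClass.java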