Unaccounted lines of code in Coverity Scan - static-analysis

My project has over 150k lines of code according to Coverity Scan, while cloc reports 30k (which is a lot more reasonable).
I am trying to figure out where those lines come from, but failing. How do I get Coverity Scan to report the actual lines of code, or at least where the extra lines come from?

By default the LOC count includes the system headers pulled in by your application. You may be able to configure component maps to filter these out if it matters enough to you.
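If you want a rough idea of where the extra lines come from, one option (outside Coverity itself) is to ask the compiler which headers a translation unit pulls in and count their lines. A minimal sketch, assuming gcc is on the PATH; main.c is an illustrative file name:

# Rough estimate of how many lines the system headers contribute for one
# translation unit. 'gcc -M' lists all included headers, 'gcc -MM' skips
# system headers; the difference is the set of system headers.
import subprocess

def dep_files(flag):
    out = subprocess.run(["gcc", flag, "main.c"],
                         capture_output=True, text=True, check=True).stdout
    # Drop the "main.o:" target and line-continuation backslashes.
    return {tok for tok in out.split() if tok != "\\" and not tok.endswith(":")}

def loc(paths):
    return sum(sum(1 for _ in open(p, errors="ignore")) for p in paths)

system_headers = dep_files("-M") - dep_files("-MM")
print("system-header lines pulled in by main.c:", loc(system_headers))

This only looks at a single translation unit, but it should make it clear whether the gap between the two counts is dominated by headers outside your own tree.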

Related

How can I simply compare two time series from two different performance tests?

I'm looking for a simple tool to compare two time series from two different tests.
For example, I'm recording the access time to a database while I delete 1,000,000 rows, one after another. I then get two CSV files:
the first one with the tags and information about the test (database version, exe name, run params, etc.)
startTime,clientComputerName,pid,testId
2022-09-29T09:20:16.453Z,COMPUTER-22,4608,A-1664443216
the second one with every value of my time series, in microseconds
startTime,duration
5,140
146,145
291,146
438,21
460,21
482,21
504,24
529,25
555,22
578,21
600,24
624,21
646,21
668,23
692,21
I get several of these test-info and time-series file pairs.
With this I would like to load one or several selected tests (based on tags in the test-info file) and plot them on top of one another to compare them.
Here's an example of the desired result:
[screenshot of the desired result]
Now here's my problem:
I tried using Plotly, but for this number of points it gets too slow and I lose interactivity, and I can't down-sample because I need to be able to investigate anomalies.
Kibana and Grafana are not options either, since they are datetime-based (unless I'm missing something here).
I'd like to find something as simple as possible (tools like Power BI are probably too complicated for this usage).
Do you know what I could use?
Thanks.
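If plain Matplotlib is fast enough for your point counts, something as small as the following sketch may do. It assumes pandas and matplotlib, plus an illustrative info_*/series_* file-naming convention and a made-up tag filter on clientComputerName:

# Load each test-info/time-series pair, keep only the runs matching a tag,
# and overlay them on one axis for comparison.
import glob
import pandas as pd
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for info_path in glob.glob("info_*.csv"):            # illustrative naming convention
    series_path = info_path.replace("info_", "series_")
    info = pd.read_csv(info_path)                     # startTime, clientComputerName, pid, testId
    series = pd.read_csv(series_path)                 # startTime (offset), duration (microseconds)

    # Example tag filter: keep only the runs recorded on one client machine.
    if info.loc[0, "clientComputerName"] != "COMPUTER-22":
        continue

    ax.plot(series["startTime"], series["duration"],
            linewidth=0.8, label=info.loc[0, "testId"])

ax.set_xlabel("offset from test start")
ax.set_ylabel("duration (microseconds)")
ax.legend()
plt.show()

If you would rather stay with Plotly, the plotly-resampler package is aimed at exactly this case (dynamically down-sampling very large series while keeping the full data available when you zoom in), and may be worth a look.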

Splitting the source code before passing to quality gates

I have a build where the Checkmarx scan takes more than four hours to scan the full source code. Is there any way to split the source code into three or four packages and scan them separately, so that we can run the scans in parallel and finish faster? If you know how, please explain how to split the source code into separate packages to send to the scan.
Currently, Checkmarx does not support linking results between separate source trees. If your code contains stand-alone components such as microservices, you can split your source code across several Checkmarx scans.
But if you split your code into separate scans and there is a "flow" (a value that passes between the split parts of the source code) which exposes a vulnerability, Checkmarx won't recognize it.
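To make the "flow" point concrete, here is a small illustrative Python sketch (not from any real code base): a tainted value crosses the boundary between two pieces of code that would end up in separate scans, so neither scan on its own can see the complete source-to-sink path.

import sqlite3

# --- would live in scan/package A ---
def handle_request(params, conn):
    user_id = params["id"]             # untrusted input enters here
    return fetch_user(user_id, conn)   # ...and crosses into the other package

# --- would live in scan/package B ---
def fetch_user(user_id, conn):
    query = "SELECT * FROM users WHERE id = " + user_id   # SQL built from tainted data
    return conn.execute(query).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
print(handle_request({"id": "1"}, conn))   # "1" works, but so would "1 OR 1=1"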

SonarQube: can a duplicated file have a detail count greater than 1?

I am using SonarQube 6.7.
In SonarQube, under Measures > Duplications, there is a Duplicated Files count. If I click on Duplicated Files, the detail is displayed as shown in the picture.
I want to know whether the number can be more than 1 for each file (the number I have circled in the picture).
Thanks a lot
[screenshot: Duplicated Files detail]

Understanding SonarQube C code coverage measures

I have a SonarQube 5.6 installation, using the C/C++ plugin 3.12 to analyse our project. I have generated coverage results (gcov), but so far only for one of the application's C files. Its coverage is 98.3%.
When analysing the whole project, that coverage result gets 'imported' and I can trace it in the web interface.
On the top-level Code page, the folder containing that file then shows 98.3%, which in my view is not correct, since no coverage is available yet for all the other C files. I've tried to show that in the following series of snapshots:
(1) Top-level Code Tree:
(2) Going down the 'Implementation' tree:
(3) Going down the 'Implementation/ComponentTemplate' tree:
(4) Going down the 'Implementation/ComponentTemplate/code' tree:
EXMPL.c has only 113 lines of code (snapshot 4). Compared to the roughly 61k total lines of code of 'Implementation' (snapshot 4), that is only about 0.2%.
The 98.3% coverage shown in (1) is therefore wrong!
My project consists of several applications; EXMPL is one of them, the smallest one. So I have to produce separate coverage results for each application and 'import' them separately into Sonar. The coverage result files are therefore located in different folders.
Maybe that project structure, or the 'incomplete import' of coverage results, is the cause of the 'wrong' coverage measures, but so far I have not found any useful information on how Sonar handles the provided gcov coverage measures.
Any help or information will be appreciated.
Thanks
Your second guess is right: the incomplete import of coverage results is what's skewing the numbers. Lines that aren't included in the coverage report aren't included in the coverage calculations. Since the current coverage report includes only one file, which is 98.3% covered, all the numbers are based on that file alone.
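A rough illustration of that arithmetic (made-up per-file numbers, not SonarQube's actual implementation): only lines that appear in the imported report enter the ratio, so the 61k unreported lines simply do not participate.

# Folder coverage is computed only over lines present in the coverage report.
report = {"EXMPL.c": (111, 113)}   # (covered lines, lines to cover) - illustrative numbers
# The other C files of 'Implementation' (~61k lines) are absent from the report
# and therefore contribute nothing to the calculation below.

covered = sum(c for c, _ in report.values())
coverable = sum(t for _, t in report.values())
print(f"folder coverage = {100 * covered / coverable:.1f}%")   # ~98%, regardless of the missing files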

JMeter - saving results + configuring "graph results" time span

I am using JMeter and have two questions (I have read the FAQ + wiki, etc.):
I use the Graph Results listener. It seems to have a fixed span, e.g. 2 hours (just guessing; this is not indicated anywhere AFAIK), after which it wraps around and starts drawing on the same canvas from the left again. Hence after a long weekend run it only shows the results of the last 2 hours. Can I configure that span or other properties (beyond the check boxes I see on the Graph Results listener itself)?
Can I save the results of a run and open them later? I know I can save the test plan or parts of it. I am unclear whether I can separately save just the test results data, and later open it and perform comparisons etc. And furthermore, can I open it with different listeners even if they weren't part of the original test? (I think of the test as accumulating data, and later on I want to view and interpret the data using different "viewers".)
Thanks,
-- Shaul
Don't know about 1. Regarding 2: listeners typically have a configuration field for "Write All Data to a File", which lets you specify the file name. You can use the Simple Data Writer to store results efficiently for later analysis.
You can load results from a previous test into a visualizer by choosing "Write All Data to a File" and browsing for the file you wish to load. Somewhat counterintuitively, selecting a file for writing also loads that file into the visualizer and displays the results. Just make sure you don't run the test again while that file is selected, otherwise you will lose your saved test data. :-)
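If you save results in CSV format (a .jtl file), you can also analyse them outside JMeter later. A minimal sketch, assuming pandas, the default CSV result columns (timeStamp, elapsed, success) and illustrative file names:

# Compare two saved JMeter result files without re-running the tests.
import pandas as pd

for name, path in [("weekend run", "weekend_run.jtl"), ("baseline", "baseline_run.jtl")]:
    run = pd.read_csv(path)
    run["timeStamp"] = pd.to_datetime(run["timeStamp"], unit="ms")   # epoch millis -> datetime
    errors = (run["success"].astype(str).str.lower() != "true").sum()
    print(f"{name}: {len(run)} samples, "
          f"mean elapsed {run['elapsed'].mean():.1f} ms, {errors} errors")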
Well, I later found a JMeter group that was discussing the issue raised in my first question, and B.Ramann gave me an excellent suggestion to use a better graph instead, found here.
-- Shaul
