Total coverage won't increase even after exclusions - SonarQube

I have noticed that the total coverage doesn't increase even after excluding uncovered lines.
What I noticed after adding the exclusions is that both the Lines to Cover and the Uncovered Lines counts were reduced, which seems to be why the total coverage remains the same.
Furthermore, after the exclusions those files DO NOT appear under Files, as expected, which confirms the exclusions are working properly.
Can someone please explain the theory behind this, and how exclusions can increase the total coverage?
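For anyone reasoning about the arithmetic, here is a rough worked example. It assumes line coverage is computed as (lines to cover - uncovered lines) / lines to cover, and that the exclusions are configured via the sonar.coverage.exclusions property; the property name and paths are illustrative assumptions, not details from the question above.

# sonar-project.properties (hypothetical paths)
sonar.coverage.exclusions=src/legacy/**/*.java

# Before excluding src/legacy: 1000 lines to cover, 100 uncovered -> 900/1000 = 90% coverage
# Case A: src/legacy has 100 lines to cover, 50 of them uncovered
#   After exclusion: 900 lines to cover, 50 uncovered -> 850/900 = ~94% (coverage goes up)
# Case B: src/legacy has 100 lines to cover, 10 of them uncovered
#   After exclusion: 900 lines to cover, 90 uncovered -> 810/900 = 90% (coverage unchanged)

In other words, excluding code only raises the total if the excluded code is less covered than the rest of the project; when Lines to Cover and Uncovered Lines shrink in roughly the same proportion, the ratio, and therefore the percentage, barely moves.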

Related

Random/Inconsistent Code Run Times - Parallel HPC

I've been running some tests on an HPC. I have a code whose run times are completely consistent when it's executed in serial. This wasn't always the case, but I added commands to my batch files so that each job reserves an entire node and all of its memory. Doing this allowed for almost perfectly consistent code execution times.
However, now that I am doing small-scale parallel tests, the code execution times seem random. I would expect some variation now that parallelization has been introduced, but the scale of the randomness seems quite bizarre.
No other jobs run on the node, so that shouldn't be the issue - the serial runs are very consistent, so it must be something to do with the parallelization.
Does anyone know what could cause this? I've included a graph showing the execution times - there is a pretty clear average, but also major outliers. All results produced are identical and correct.
I'm under an NDA so cannot include much info about my code. Please feel free to ask questions and I'll see if I can help. Apologies if I'm not allowed to answer!
I'm using Fortran 90 as the main code language, and the HPC uses Slurm. NTASKS = 8 for these tests, but the randomness appears whenever NTASKS > 1. The number of tasks and the degree of randomness don't seem particularly linked; the randomness simply occurs whenever the run is parallel. I'm using Intel's auto-parallelization feature rather than OpenMP/MPI.
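For illustration, here is a minimal sketch of the kind of batch file described above. The directives are standard Slurm options; the job name, executable and exact values are placeholders rather than my actual setup.

#!/bin/bash
#SBATCH --job-name=partest    # placeholder name
#SBATCH --nodes=1             # keep the whole run on a single node
#SBATCH --ntasks=8            # NTASKS for these tests
#SBATCH --exclusive           # reserve the entire node
#SBATCH --mem=0               # request all of the node's memory

./my_program                  # placeholder executable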
Thanks in advance
SOLVED!!!! Thanks for your help everyone!
I did small scale tests 100 times to get to the root of the problem.
As the execution times were rather small, I noticed that the larger outliers (longer run times) often occurred when a lot of new jobs from other users were submitted to the HPC. This made a lot of sense and wasn't particularly surprising.
The main reason these results really confused me was the smaller outliers (much quicker run times). It made sense that runs would sometimes take longer if the machine was busy, but I just couldn't figure out how the code sometimes ran much quicker while still giving the same results!
Probably a bit of a rookie error, but it turns out not all nodes are equal on our HPC! About 80% of the nodes are identical, giving roughly the same run times (or longer if busy). BUT, the newest 20% (i.e. the highest node numbers with Node0XX) must be higher performance.
I checked the 'sacct' Slurm data and job run times, and can confirm all faster execution time outliers occurred on these newer nodes.
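For anyone wanting to do a similar check, a command along these lines lists the node(s) and elapsed time for a given job (the job ID here is a placeholder, and sacct supports many other format fields):

sacct -j 1234567 --format=JobID,JobName,NodeList,Elapsed,State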
Very odd situation and something I hadn't been made aware of. Hopefully this might help someone out if they're in a similar situation. I spent so long checking source code/batch files/code timings that I hadn't even considered the HPC hardware itself. Something to keep in mind.
I did much longer tests (about an hour each) and the longer execution time outliers mostly disappeared, because the queuing penalty was now small in comparison to the total execution time. But the much quicker execution time outliers still occurred. Again, I checked the accounting data and these outliers always occurred on the high node numbers.
Hopefully this can help at least one person with a similar headache.
Cheers!

In Continuum, how are the risky and high risk files calculated and included in "risk" metrics?

Continuum has the notion of showing Risk for a package manifest, and as a high-level roll-up metric for a version of a package that is in flight.
How are the risky files calculated, how are the high risk files calculated, and how are the risk dashboard metrics calculated?
The Continuum product now displays a helpful dialog that shows how the risk information is calculated and used.

SonarQube Coverage Randomly Drops with the Same Commit

We have a Jenkins pipeline that (as one of its steps) runs SonarQube. Right now it insists that coverage must be 90% or higher, which is what we want to maintain. Unfortunately, the coverage it "perceives" will sometimes randomly drop. For example, if our coverage has been around 95%, it will report coverage of 70-80%.
However, re-running the exact same commit will then yield the correct coverage of 95% or whatever. We've seen this problem a few times. In the specific example I just encountered, we went from 74% the first time the commit was analyzed to 94% the second time.
Expected: The same commit should consistently produce the same level of coverage in Sonarqube.
Actual: Sometimes coverage will vary substantially (20 percentage points or even a bit more) with the same commit.

How to profile executions of Saxon

In the spirit of Dimitre Novatchev's answer at XSLT Performance, I want to create a profile that shows where the time consumed by my XSL transform has gone. Using the Saxon -TP:profile.html option, we created an “Analysis of Stylesheet Execution Time” HTML document.
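For reference, the report can be produced with a command along these lines; the jar name and file names are placeholders for whatever Saxon edition and inputs are in use:

java -cp saxon9he.jar net.sf.saxon.Transform -s:input.xml -xsl:transform.xsl -o:output.xml -TP:profile.html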
At the top of that document, we see:
Total time: 1102316.688 milliseconds
This figure (1,102 seconds) corresponds with my measured program execution time.
However, the sum of the “total time (net)” column values is less than 2% of this total. I assume, per http://www.saxonica.com/html/documentation/using-xsl/performanceanalysis.html, that the “total time (net)” column values are reported in milliseconds.
I would normally work a profile from the top down, but in this case, I don’t want to invest effort into optimizing a template that is reported to have contributed less than 0.5% of my total response time.
How can I find out where my time has really gone? Specifically, how can I learn where the unreported 98% of my program's time has been consumed?

Speed decrease on volatile neo4j dataset

I'm using neo4j in one of my projects and have noticed that my local database (used to run test suites) becomes slower and slower over time. This is a low-priority issue, as it currently doesn't seem to occur during real-world use (outside of running huge test suites), but for the sake of improving neo4j I figured it would be best to post it nonetheless :)
As it currently stands, these are my findings:
the speed decrease is linked to the amount of tests executed (and therefore, the amount of created/deleted nodes)
the db size increases, even though each test suite clears the database* after use (indicating dead nodes remain)
deleting the graph.db file solves the issue (further proof for the dead nodes theory)
Although the problem can easily be solved in an acceptable way for a test database, I'm still worried about the production implications of this symptom for long-running databases with volatile data. Granted, a database with data as volatile as the test data is an edge case, but it shouldn't be a problem at all. At a minimum, a production-ready solution (I'm thinking dead-node pruning) should be available; however, I can find nothing of the sort in the documentation.
Is this a known issue? I couldn't find any reference to similar issues. Any help in locating the exact cause would be greatly appreciated, as I'd like to contribute a patch if I can find (and solve) the actual problem.
*) The database is cleared using two separate Cypher statements (to prevent occasional occurrences of issue 27), run in order:
MATCH ()-[r]-() DELETE r
MATCH (n) DELETE n
I've experienced the same behavior as well. We were running a heavy calculation script every 15 minutes on the entire database. That produced huge (logical) log files that seemed to decrease the performance. In order to reduce the log files, you need to set the keep_logical_logs property. For tests, the following might be a good setting:
keep_logical_logs=24 hours
For tests, you'd also want to consider the ImpermanentGraphDatabase if an embedded database is an option. You can get it with:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-kernel</artifactId>
    <version>2.0.1</version>
    <classifier>tests</classifier>
</dependency>
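A minimal sketch of how that might be used in a test, assuming the neo4j 2.0.x API (TestGraphDatabaseFactory comes from the tests classifier above):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.test.TestGraphDatabaseFactory;

public class ImpermanentDbExample {
    public static void main(String[] args) {
        // Impermanent (in-memory) database: no graph.db store on disk to grow between runs
        GraphDatabaseService db = new TestGraphDatabaseFactory().newImpermanentDatabase();
        try (Transaction tx = db.beginTx()) {
            db.createNode();   // use the database as usual inside a transaction
            tx.success();
        }
        db.shutdown();         // all data is discarded
    }
}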
