SonarQube Coverage Randomly Drops with Same Commit

We have a Jenkins pipeline that (as one of its steps) runs SonarQube. Right now it insists that coverage must be 90% or higher, which is what we want to maintain. Unfortunately, the coverage it "perceives" will sometimes drop at random: for example, when our coverage has been sitting around 95%, it will suddenly report 70-80%.
However, re-running the exact same commit will then yield the correct coverage of 95% or thereabouts. We've seen this problem a few times. In the specific example I just encountered, we went from 74% the first time the commit was checked to 94% the second time.
Expected: The same commit should consistently produce the same coverage in SonarQube.
Actual: Coverage sometimes varies substantially (by 20 percentage points or even more) for the same commit.
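For context, the relevant part of the pipeline looks roughly like the sketch below. The stage names, server name and build command are illustrative placeholders rather than our exact Jenkinsfile; the quality-gate step is what fails the build when the reported coverage dips below 90%.

    stage('SonarQube analysis') {
        steps {
            withSonarQubeEnv('MySonarServer') {   // placeholder server name
                sh './gradlew sonarqube'          // or mvn sonar:sonar, etc.
            }
        }
    }
    stage('Quality gate') {
        steps {
            timeout(time: 10, unit: 'MINUTES') {
                // Fails the build if the project's quality gate (coverage >= 90%) is not met
                waitForQualityGate abortPipeline: true
            }
        }
    }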

Related

Random/Inconsistent Code Run Times - Parallel HPC

I've been running some tests on an HPC. I have a code that, when executed in serial, gives completely consistent run times. This wasn't always the case, but I added commands to my batch files so that each job reserves an entire node and all of its memory, which made the serial execution times almost perfectly consistent.
However, now that I am doing small-scale parallel tests, the execution times seem random. I would expect some variation now that parallelization has been introduced, but the scale of the randomness seems bizarre.
No other jobs run on the node, so that shouldn't be a factor - and since the serial runs are very consistent, it must be something to do with the parallelization.
Does anyone know what could cause this? I've included a graph showing the execution times - there is a pretty clear average, but also major outliers. All results produced are identical and correct.
I'm under an NDA so cannot include much info about my code. Please feel free to ask questions and I'll see if I can help. Apologies if I'm not allowed to answer!
I'm using Fortran 90 as the main code language, and the HPC uses Slurm. NTASKS = 8 for these tests, but the randomness appears whenever NTASKS > 1; the number of tasks and the amount of randomness don't seem particularly linked - it is simply that any parallel run shows it. I'm using Intel's auto-parallelization feature rather than OpenMP/MPI.
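To give a rough idea of the setup (the values and file names below are illustrative, not my actual scripts, since I can't share them): the batch file reserves the whole node and its memory and then runs the auto-parallelized binary, something like:

    #!/bin/bash
    #SBATCH --nodes=1        # single node per test
    #SBATCH --ntasks=8       # NTASKS = 8 for these tests
    #SBATCH --exclusive      # keep other users' jobs off the node
    #SBATCH --mem=0          # request all of the node's memory

    # Compiled with Intel's auto-parallelizer (no OpenMP/MPI), e.g.:
    #   ifort -O2 -parallel mycode.f90 -o mycode
    ./mycode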
Thanks in advance
SOLVED!!!! Thanks for your help everyone!
I ran the small-scale test 100 times to get to the root of the problem.
Because the execution times were fairly short, I noticed that the larger outliers (longer run times) often occurred when a lot of new jobs from other users were being submitted to the HPC. That made a lot of sense and wasn't particularly surprising.
What really confused me were the smaller outliers (much quicker run times). It made sense that runs would sometimes take longer when the machine was busy, but I couldn't figure out how they sometimes ran much quicker while still giving the same results!
Probably a bit of a rookie error, but it turns out not all nodes on our HPC are equal! About 80% of the nodes are identical, giving roughly the same run times (or longer when busy). BUT the newest 20% (i.e. the highest node numbers, Node0XX) must be higher-performance hardware.
I checked the 'sacct' Slurm data and job run times, and can confirm all faster execution time outliers occurred on these newer nodes.
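(For anyone wanting to do the same check: JobID, JobName, Elapsed, NodeList and State are standard sacct format fields, e.g.

    sacct -j 1234567 --format=JobID,JobName,Elapsed,NodeList,State

with the job ID being whichever job you want to inspect - the NodeList column shows which node each run landed on.)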
A very odd situation and something I hadn't been made aware of. Hopefully this might help someone out if they're in a similar situation. I spent so long checking source code, batch files and code timings that I hadn't even considered the HPC hardware itself. Something to keep in mind.
I also did much longer tests (about an hour each) and the longer-execution-time outliers essentially disappeared, because the queuing penalty was now small compared with the total execution time. The much quicker outliers still occurred, though - and again, the accounting data showed they always landed on the high node numbers.
Hopefully this can help at least one person with a similar headache.
Cheers!

What is a Gradle artifact transform task and why does it take so long?

I ran the profiler for my build, and under the Artifact Transform tab I can see the different times taken by the transform tasks.
At first I thought the 1m16.09s mark meant that this step finished about a minute after I started the build, and that the remaining ~26 minutes went on compilation. However, if I sum all of the times in the Duration column, they add up to exactly 27m21.96s, so I now understand that each transform takes the time shown on the right.
So I am wondering: do transforms really take that much time? Why would it take a minute to transform a common jar?
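For reference, my understanding so far: an artifact transform is not an ordinary task - it runs during dependency resolution and converts an artifact from one form into another (for example unpacking or instrumenting a jar into the format a consumer requests). A registration in the Kotlin DSL looks roughly like the sketch below; the class name and attribute values are made up for illustration.

    import org.gradle.api.artifacts.transform.InputArtifact
    import org.gradle.api.artifacts.transform.TransformAction
    import org.gradle.api.artifacts.transform.TransformOutputs
    import org.gradle.api.artifacts.transform.TransformParameters
    import org.gradle.api.attributes.Attribute
    import org.gradle.api.file.FileSystemLocation
    import org.gradle.api.provider.Provider

    // Attribute Gradle uses to decide which artifact format a consumer wants.
    val artifactType = Attribute.of("artifactType", String::class.java)

    // Hypothetical transform: receives one resolved artifact, produces a converted form.
    abstract class MyJarTransform : TransformAction<TransformParameters.None> {
        @get:InputArtifact
        abstract val inputArtifact: Provider<FileSystemLocation>

        override fun transform(outputs: TransformOutputs) {
            val input = inputArtifact.get().asFile
            // A real transform would unpack, minify or instrument the jar here;
            // this placeholder just registers the input unchanged as the output.
            outputs.file(input)
        }
    }

    dependencies {
        registerTransform(MyJarTransform::class) {
            from.attribute(artifactType, "jar")
            to.attribute(artifactType, "converted-jar")
        }
    }

If that's right, the durations in that tab are time spent doing this kind of conversion on resolved dependencies rather than ordinary task execution.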

Total coverage won't increase even after exclusions

I have noticed that the total coverage doesn't increase even after excluding uncovered lines.
What I noticed after the exclusions is that both the number of Lines to Cover and the number of Uncovered Lines were reduced, so it looks like that is why the total coverage stays the same.
Furthermore, after the exclusions those files DO NOT appear under Files, as expected, which confirms the exclusions are working properly.
Can someone please explain the theory behind this, and how exclusions can increase the total coverage?
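To make the numbers concrete (invented figures, using line coverage = covered lines ÷ lines to cover, which is how I understand SonarQube calculates it): suppose a project has 1,000 lines to cover, 100 of them uncovered, so coverage is 900 / 1,000 = 90%. Excluding a file that had 50 uncovered and 50 covered lines leaves 900 lines to cover with 50 uncovered, i.e. 850 / 900 ≈ 94.4%, so the total goes up. But if the excluded file was mostly covered already (say 90 covered, 10 uncovered), the result is 810 / 900 = 90% and the total doesn't move at all - which looks a lot like what I'm seeing.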

What are the two different times output from an XCTest run?

When I run my set of unit tests (Xcode 9.2), it logs output like this:
Test Suite 'All tests' passed at 2017-12-13 14:16:27.947.
Executed 319 tests, with 0 failures (0 unexpected) in 0.372 (0.574) seconds
There are two times here, 0.372 and 0.574 seconds respectively.
Can anyone please tell me (or point me to anything that explains) what the two different values mean, and why there is a difference between the two?
The first time, 0.372, is the time actually spent executing the test cases.
The second, 0.574, is the elapsed time between the beginning and the end of the measurements.
Why a difference of 0.202? I suppose there is some context-switching overhead of a few milliseconds, which depends on the number of test cases and test suites.
Moreover, as another check: the 5.434 is the delta between 12.247 and 17.681, i.e. between the effective beginning of the unit testing and the end of execution of the last test suite.

How to profile executions of Saxon

In the spirit of Dimitre Novatchev's answer at XSLT Performance, I want to create a profile that shows where the time consumed by my XSL transform has gone. Using the Saxon -TP:profile.html option, we created an “Analysis of Stylesheet Execution Time” HTML document.
At the top of that document, we see:
Total time: 1102316.688 milliseconds
This figure (1,102 seconds) corresponds with my measured program execution time.
However, the sum of the “total time (net)” column values is less than 2% of this total. I assume, per http://www.saxonica.com/html/documentation/using-xsl/performanceanalysis.html, that the “total time (net)” column values are reported in milliseconds.
I would normally work a profile from the top down, but in this case, I don’t want to invest effort into optimizing a template that is reported to have contributed less than 0.5% of my total response time.
How can I find out where my time has really gone? Specifically, how can I learn where the unreported 98% of my program's time has been consumed?
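For reference, the profile was produced with a command along these lines (the jar and file names are placeholders; -TP: writes the HTML profile and -t prints Saxon's overall timing to stderr):

    java -cp saxon9he.jar net.sf.saxon.Transform -t -s:input.xml -xsl:transform.xsl -o:output.xml -TP:profile.html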
