Sonar: Execution time history of a single test

TXTFIT = test execution time for individual test
Hello,
I'm using Sonar to analyze my Maven Java project. I'm testing with JUnit and generating reports on test execution time with the Maven Surefire plugin.
In Sonar I can see the test execution time and drill down to see how long each individual test took. In the time machine I can only compare the overall test execution time between two releases.
What I want is to see how the TXTFIT changed since the last version.
For example:
In version 1.0 of my software, htmlParserTest() takes 1 sec to complete. In version 1.1 I add a whole bunch of tests (so the overall execution time is going to be way longer), but htmlParserTest() also suddenly takes 2 secs. I want to be notified: "Hey mate, htmlParserTest() takes twice as long as it used to. You should take a look at it."
What I'm currently struggling to find out:
How exactly does the TXTFIT get from the Surefire XML report into Sonar?
I'm currently looking at AbstractSurefireParser.java, but I'm not sure that's actually the default Surefire plugin, and it's 5-year-old code. I'm currently checking out this. I still have no idea where Sonar is getting the TXTFIT from, or how and where it connects them to the source files.
Can I find the TXTFIT in the Sonar DB?
I'm looking at the local DB of my test Sonar with DBVisualizer and I don't really know where to look. The SNAPSHOT_DATA table doesn't seem human-readable.
Are the TXTFIT even saved in the DB?
Depending on the answer, I either have to write a Sensor that actually saves them or a widget that simply shows them on the dashboard.
Any help is very much appreciated!

The web service api/tests/* introduced in version 5.2 allows you to get this information. Example: http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c
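Once the per-test durations have been retrieved (e.g. from api/tests/list for two analyses), the comparison asked for above could be sketched like this. The class, method names and the threshold factor are hypothetical illustrations, not part of SonarQube:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Flags tests whose duration grew by at least the given factor between
// two releases. Input maps are test name -> duration in milliseconds.
public class TestDurationDiff {
    public static Map<String, String> regressions(
            Map<String, Long> previous, Map<String, Long> current, double factor) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, Long> e : current.entrySet()) {
            Long before = previous.get(e.getKey());
            if (before != null && before > 0 && e.getValue() >= before * factor) {
                result.put(e.getKey(), before + "ms -> " + e.getValue() + "ms");
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Long> v10 = Map.of("htmlParserTest", 1000L, "fooTest", 200L);
        Map<String, Long> v11 = Map.of("htmlParserTest", 2000L, "fooTest", 210L);
        // htmlParserTest doubled, fooTest did not
        System.out.println(regressions(v10, v11, 2.0));
    }
}
```

The notification part ("Hey mate...") would then just iterate over the returned map.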

Related

Hibernate: DB not reliably rolled back at end of unit tests in spite of @Transactional

We have a large application using Spring for application setup, initialisation and "wiring" and Hibernate as persistence framework. For that application we have a couple of unit tests which are causing us headaches because they again and again run "red" when executing them on our Jenkins build server.
These UnitTests execute and verify some rather complex and lengthy core-operations of our application and thus we considered it too complex and too much effort to mock the DB. Instead these UTs run against a real DB. Before the UTs are executed we create the objects required (the "pre-conditions"). Then we run a Test and then we verify the creation of certain objects, their status and values etc. All plain vanilla...
Since we run multiple tests in sequence which all need the same starting point, these tests derive from a common parent class that has an @Transactional annotation. The purpose of that is that the DB is always rolled back after each unit test so that the subsequent test can start from the same baseline.
That approach is working perfectly and reliably when executing the unit-tests "locally" (i.e. running a "mvn verify" on a developer's workstation). However, when we execute the very same tests on our Jenkins, then - not always but very often - these tests fail because there are too many objects being found after a test or due to constraint violations because certain objects already exist that shouldn't yet be there.
As we found out by adding lots of log-statements (because it's otherwise impossible to observe code running on Jenkins) the reason for these failures is, that the DB is occasionally not properly rolled back after a prior test. Thus there are left-overs from the previous test(s) in the DB and these then cause issue during subsequent tests.
What's puzzling us most is:
Why are these tests failing ONLY when we execute them on Jenkins, but never when we run the very same tests locally? We are using an absolutely identical Maven command line and code here, as well as the same Java version, Maven version, etc.
We are by now sure that this has nothing to do with UTs being executed in parallel, as we initially suspected. We disabled all options to run UTs in parallel that the Maven Surefire plugin offers. Our log statements also clearly show that the tests are perfectly serialized, but again and again objects "pile up": after each test method, objects that were supposed to have been removed/rolled back at the end of the test are still there, and their number increases with each test.
We also observed a certain "randomness" with this effect. Often, the Jenkins builds run fine for several commits and then suddenly (even without any code change, just by retriggering a new build of the same branch) start to run red. The DB, however, is getting re-initialized before each build & test-run, so that can not be the source of this effect.
Any idea anyone what could cause this? Why do the DB rollbacks that are supposed to be triggered by the @org.springframework.transaction.annotation.Transactional annotation work reliably on our laptops but not on our build server? Does anyone have similar experiences or findings on this?
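As a minimal sketch of the base-class pattern described above (class names and the context location are hypothetical; JUnit 4 and spring-test annotations are assumed, so this fragment is not runnable on its own):

```java
// Hypothetical sketch of the shared test base class. @Transactional on the
// base class makes spring-test wrap each test method in a transaction that
// is rolled back after the method finishes.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
@Transactional
public abstract class AbstractRollbackTest {
    // shared fixture setup creating the common "pre-conditions" goes here
}

public class ComplexCoreOperationTest extends AbstractRollbackTest {
    @Test
    public void createsExpectedObjects() {
        // Runs inside the test transaction, so its changes are rolled back.
        // Note: work the code under test commits in a *separate* transaction
        // (e.g. propagation REQUIRES_NEW, or a different DataSource) is NOT
        // covered by this rollback, which is one possible source of leftovers.
    }
}
```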

How to Run the methods one after the other in cucumber

I am trying to automate SonarQube from Cucumber; below is the code of my feature file:
Feature: Bring up the sonarqube instance and scan the code from the SonarScanner
Scenario: Perform certain Actions on the SonarQube
Given Check the SonarQube Instance is Up or Not
When Scan the Code from the Sonar Scanner
Then List the project Names from SonarQube
Then List out the Bugs and Issues of the Project
In the step definition file, in the Given block I am checking whether SonarQube is up or not; if it's not up, I run the StartSonar.bat file. But bringing SonarQube up takes 2 to 3 minutes, and in the meantime the When and Then blocks start executing. I need assistance so that while the Given block is running the other blocks stay idle for those 2 to 3 minutes; only once the Given block completes should the When block start.
Regards,
Nandan
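One way to keep the When/Then steps from starting early is to block inside the Given step itself until a readiness check succeeds or a timeout expires. A minimal sketch of such a wait helper (the class is hypothetical; wiring it up, e.g. by polling SonarQube's api/system/status endpoint after launching StartSonar.bat, is left to the step definition):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Polls a readiness check until it returns true or the timeout expires.
// Cucumber executes steps sequentially, so blocking here keeps the
// When/Then steps idle until the Given step returns.
public class WaitUntil {
    public static boolean waitUntil(BooleanSupplier check, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.getAsBoolean()) {
                return true;
            }
            TimeUnit.MILLISECONDS.sleep(pollMs);
        }
        return check.getAsBoolean();  // one last attempt at the deadline
    }
}
```

With a 3-minute budget the Given step might call something like waitUntil(check, 180_000, 5_000) and fail the scenario if it returns false.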

How does SonarQube calculate coverage through JaCoCo?

JaCoCo just outputs jacoco.exec, which is the input for Sonar. In that file, there seems to be only this info:
- Class name
- Total Class Probes
- Executed Class Probes
But then SonarQube cannot rely solely on these values, as it needs to tell you which exact lines are uncovered, so Sonar must be performing some analysis itself. So how does it use the JaCoCo report? And why does it need it?
So how does it use Jacoco report? And why does it need it?
SonarQube alone doesn't and can't know anything about which tests you actually executed and how they cover your code. To obtain this information it relies on third-party test coverage tools. In the case of Java it relies on data collected and provided by JaCoCo, as explained in the answer to your similar question (JaCoCo collects execution information in the exec file and obtains line numbers and other information from the class files during report generation). Alternatively, SonarQube can rely on data in its "generic format".
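That division of labour can be illustrated with a toy model. This is not JaCoCo's real API, just the idea that the exec file only records which probes fired, while mapping probes to line numbers requires information from the compiled class files:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Toy model of JaCoCo-style reporting: the exec file contributes the
// per-class probe results (a boolean per probe); the class files' debug
// info contributes the probe-to-line mapping. Only by combining both can
// a report name the exact covered (and hence uncovered) lines.
public class ProbeCoverage {
    public static Set<Integer> coveredLines(boolean[] probes, Map<Integer, Integer> probeToLine) {
        Set<Integer> lines = new TreeSet<>();
        for (int i = 0; i < probes.length; i++) {
            if (probes[i]) {
                lines.add(probeToLine.get(i));
            }
        }
        return lines;
    }
}
```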

How can I set up JMeter to give me daily results

I've started using JMeter to run daily performance tests, and have also just figured out how to produce an HTML dashboard.
What I need to do now is find a way to run JMeter every day, producing an HTML dashboard of the results, but with comparisons against the results of the last few days. This would mean adding to the data of existing files instead of creating a new HTML dashboard every day.
Can anyone help me with this?
The easiest solution is putting your JMeter test under Jenkins control.
Jenkins provides:
A flexible mechanism for scheduling a job
A Performance Plugin which automatically analyses current and previous builds and displays a performance trend chart on the dashboard
Alternatively you can schedule JMeter runs using e.g. the Windows Task Scheduler and compare the current run with the previous one using the Merge Results plugin.

Distributed JMeter test fails with java error but test will run from JMeter UI (non-distributed)

My goal is to run a load test using 4 Azure servers as load generators and 1 Azure server to initiate the test and gather results. I had the distributed test running and I was getting good data. But today when I remote start the test 3 of the 4 load generators fail with all the http transactions erroring. The failed transactions log the following error:
Non HTTP response message: java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory (Caused by java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory)
I confirmed the presence of commons-logging-1.2.jar in the jmeter\lib folder on each machine.
To try to narrow down the issue I set up one Azure server to both initiate the load and run JMeter-server but this fails too. However, if I start the test from the JMeter UI on that same server the test runs OK. I think this rules out a problem in the script or a problem with the Azure machines talking to each other.
I also simplified my test plan down to where it only runs one simple http transaction and this still fails.
I've gone through all the basics: reinstalled jmeter, updated java to the latest version (1.8.0_111), updated the JAVA_HOME environment variable and backed out the most recent Microsoft Security update on the server. Any advice on how to pick this problem apart would be greatly appreciated.
I'm using JMeter 3.0r1743807 and Java 1.8
The Azure servers are running Windows Server 2008 R2
I did get a resolution to this problem. It turned out to be a conflict between some extraneous code in a jar file and a component of JMeter. It was “spooky” because something influenced the load order of referenced jar files and JMeter components.
I had included a jar file in my JMeter script using the “Add directory or jar to classpath” function in the Test Plan. This jar file has a piece of code I needed for my test along with many other components and one of those components, probably a similar logging function, conflicted with a logging function in JMeter. The problem was spooky; the test ran fine for months but started failing at the maximally inconvenient time. The problem was revealed by creating a very simple JMeter test that would load and run just fine. If I opened the simple test in JMeter then, without closing JMeter, opened my problem test, my problem test would not fail. If I reversed the order, opening the problem test followed by the simple test then the simple test would fail too. Given that the problem followed the order in which things loaded I started looking at the jar files and found my suspect.
When I built the script I left the jar file alone, thinking that the functions I needed might have dependencies on other pieces within the jar. Now that things were broken I needed to find out whether that was true, and happily it is not. So, to fix the problem, I changed the extension of my jar file to zip and edited it in 7-Zip. I removed all the code except what I needed. I kept all the folders in the path to my needed code, for two reasons: I did not have to update the code that called the functions, and when I tried changing the path the functions did not work.
Next I changed the extension on the file back to jar and changed the reference in JMeter’s “Add directory or jar to classpath” function to point to the revised jar. I haven’t seen the failure since.
Many thanks to the folks who looked at this. I hope the resolution will help someone out.