How to run the methods one after the other in Cucumber - SonarQube

I am trying to automate SonarQube from Cucumber; below is the code of my feature file:
Feature: Bring up the SonarQube instance and scan the code with the SonarScanner
Scenario: Perform certain actions on SonarQube
Given Check the SonarQube Instance is Up or Not
When Scan the Code from the Sonar Scanner
Then List the project Names from SonarQube
Then List out the Bugs and Issues of the Project
In the step definition file, in the Given block I check whether SonarQube is up or not; if it is not up, I run the StartSonar.bat file. Bringing SonarQube up takes 2 to 3 minutes, but in the meantime the When and Then blocks start executing. I need assistance: while the Given block is running, the other blocks should stay idle for those 2 to 3 minutes, and only once the Given block completes should the When block start.
Regards,
Nandan

Related

SonarQube 7.7+, GitLab Plugin

As of SonarQube 7.7 and up, the Sonar-GitLab plugin is unavailable for compatibility reasons.
In the meantime, is there a way to fail a GitLab CI pipeline on a Quality Gate failure?
The Sonar Scanner creates a small folder in the scan execution folder which contains a file named report-task.txt.
- scan_exec_folder
| - .scannerwork
| | - report-task.txt
This report-task.txt file contains basic info about the current scan, including
SonarQube server URL
ceTaskUrl (namely, the Compute Engine Task URL of the current scan)
By curling the ceTaskUrl, you can get the analysis status and, once the analysis is successful, the analysisId. (You'll almost certainly have to wait for the analysis to complete; you can loop on the value of the status, for example.)
Next, curling the SonarQube server URL on path
/api/qualitygates/project_status?analysisId=${yourAnalysisId}
will return the result of the Quality Gate computation in a JSON document. If the status is ERROR, you know that at least one criterion has failed.
A bit of tweaking with greps and awks will allow you to script this procedure and incorporate it as a task in your GitLab CI pipeline, along the lines of the sketch below.
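
For illustration, here is a minimal bash sketch of that procedure (using jq to parse the JSON rather than grep/awk; the report path, the 5-second poll interval, and anonymous server access are assumptions):

#!/bin/bash
set -e

# report-task.txt is written by the scanner into .scannerwork
REPORT=.scannerwork/report-task.txt
SERVER_URL=$(grep '^serverUrl=' "$REPORT" | cut -d= -f2-)
CE_TASK_URL=$(grep '^ceTaskUrl=' "$REPORT" | cut -d= -f2-)

# Poll the Compute Engine task until the background analysis finishes.
STATUS=PENDING
while [ "$STATUS" = "PENDING" ] || [ "$STATUS" = "IN_PROGRESS" ]; do
    sleep 5
    STATUS=$(curl -s "$CE_TASK_URL" | jq -r '.task.status')
done
[ "$STATUS" = "SUCCESS" ] || { echo "Analysis failed: $STATUS"; exit 1; }

# Fetch the analysisId of the finished analysis and ask the Quality Gate for its verdict.
ANALYSIS_ID=$(curl -s "$CE_TASK_URL" | jq -r '.task.analysisId')
QG_STATUS=$(curl -s "$SERVER_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')

echo "Quality Gate: $QG_STATUS"
# A non-zero exit fails the GitLab CI job.
[ "$QG_STATUS" != "ERROR" ]

If your server does not allow anonymous access, each curl call additionally needs credentials, e.g. curl -u "$SONAR_TOKEN:".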

GoCD - How do you stop generating artifacts when JUnit, Jasmine, or regression tests fail in GoCD

We are actively using GoCD. We get JUnit, Jasmine, and other results; however, the build artifacts are always published by GoCD and picked up by other agents to perform automated deployment.
We wish to set percentage value markers for JUnit, Jasmine, etc., and if the observed value is less than the marker, we want GoCD not to publish the artifacts.
Any ideas?
Ideally, after report creation another task kicks in which verifies the report results.
It could be, e.g., a grep command inside a shell script looking for the words fail or error in the XML report files. As soon as the task finishes with a return code not equal to 0, GoCD considers the task to be failed.
The same applies to the percentage marker: a task is needed which calculates the percentage and then provides an appropriate return code: 0 when the percentage goal is met or exceeded, and non-zero when the goal has not been met. This could also be implemented as a custom task, such as a shell script evaluating the reports (see the sketch below).
The pipeline itself can be configured to not publish any artifacts in case the task fails or errors.
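
A minimal sketch of such a verification task, assuming JUnit-style XML reports under reports/; the 90% threshold and the report path are illustrative:

#!/bin/bash
THRESHOLD=90

# Count <testcase> and <failure>/<error> elements across all report files.
total=$(grep -o '<testcase' reports/*.xml | wc -l)
failed=$(grep -o -e '<failure' -e '<error' reports/*.xml | wc -l)

[ "$total" -gt 0 ] || { echo "No tests found"; exit 1; }

passed=$((total - failed))
pct=$((100 * passed / total))
echo "Pass rate: $pct% (threshold: $THRESHOLD%)"

# Exit non-zero when the goal is not met; GoCD then fails the task,
# the stage stops, and no artifacts are published.
[ "$pct" -ge "$THRESHOLD" ]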

How can I set up JMeter to give me daily results

I've started using JMeter to run daily performance tests, and have also just figured out how to produce an HTML dashboard.
What I need to do now is find a way to run JMeter every day, producing an HTML dashboard of the results but with comparisons against the results of the last few days. This would mean adding to the data of existing files instead of creating a new HTML dashboard every day.
Can anyone help me with this?
The easiest solution is putting your JMeter test under Jenkins control.
Jenkins provides:
A flexible mechanism for scheduling jobs
The Performance Plugin for Jenkins, which automatically analyses the current and previous builds and displays a performance trend chart on the dashboard
Alternatively, you can schedule JMeter runs using e.g. the Windows Task Scheduler and compare the current run with the previous one using the Merge Results plugin (a cron-based sketch follows).
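
If you take the scheduler route on Linux, a minimal bash sketch of the daily run (the paths, file layout, and 02:00 schedule are illustrative):

#!/bin/bash
# run_daily.sh - non-GUI JMeter run that keeps one dated result file per day
DAY=$(date +%F)
RESULTS=/var/jmeter/results/$DAY.jtl
DASHBOARD=/var/jmeter/dashboards/$DAY

# -n non-GUI mode, -t test plan, -l results file, -e/-o generate the HTML dashboard
jmeter -n -t /var/jmeter/plans/daily.jmx -l "$RESULTS" -e -o "$DASHBOARD"

# crontab entry running the script every day at 02:00:
# 0 2 * * * /var/jmeter/run_daily.sh

Keeping the dated .jtl files around is what later lets you merge runs or compare today's results against the last few days.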

How can I get a total count of passes/failures at the project level in Cucumber JVM reports for Jenkins

From the Jenkins dashboard I have Cucumber JVM reports for each job that's run. I can see my feature with, for example, 4 passed and 1 failed scenario.
In a particular tab (project/application/what have you) I'll have a series of features.
Is there a way to get a total count of passes/failures for the entire tab (all the features in the tab)? An additional plugin?
I think this plugin should do the job:
https://github.com/damianszczepanik/cucumber-reporting

Sonar: Execution time history of a single test

TXTFIT = test execution time for an individual test
Hello,
I'm using Sonar to analyze my Maven Java project. I'm testing with JUnit and generating reports on the test execution time with the Maven Surefire plugin.
In my Sonar I can see the test execution time and drill down to see how long each individual test took. In the Time Machine I can only compare the overall test execution time between two releases.
What I want is to see how the TXTFIT has changed since the last version.
For example:
In version 1.0 of my software the htmlParserTest() takes 1 sec to complete. In version 1.1 I add a whole bunch of tests (so the overall execution time is going to be way longer), but the htmlParserTest() also suddenly takes 2 secs. I want to be notified: "Hey mate, the htmlParserTest() takes twice as long as it used to. You should take a look at it."
What I'm currently struggling to find out:
How exactly does the TXTFIT get from the Surefire XML report into Sonar?
I'm currently looking at AbstractSurefireParser.java, but I'm not sure if that's actually the default Surefire plugin.
I was looking at 5-year-old stuff. I'm currently checking out this. I still have no idea where Sonar is getting the TXTFIT from, or how and where it connects it to the source files.
Can I find the TXTFIT in the Sonar DB?
I'm looking at the local DB of my test Sonar with DbVisualizer and I don't really know where to look. The SNAPSHOT_DATA table doesn't seem to be readable by humans.
Are the TXTFIT values even saved in the DB?
Depending on this, I either have to write a Sensor that actually saves them or a widget that simply shows them on the dashboard.
Any help is very much appreciated!
The web service api/tests/*, introduced in version 5.2, allows you to get this information. Example: http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c
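
For example, a small bash sketch that dumps per-test durations so two analyses can be diffed (the name and durationInMs fields follow the 5.x response format; the server URL is a placeholder, and the file UUID is taken from the example above):

#!/bin/bash
SONAR=http://localhost:9000
FILE_UUID=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c

# Print "testName durationInMs" for every test in the given test file.
curl -s "$SONAR/api/tests/list?testFileUuid=$FILE_UUID" \
    | jq -r '.tests[] | "\(.name) \(.durationInMs)"' > current.txt

# If current.txt from the previous release's analysis was saved as previous.txt,
# diffing the two highlights tests whose execution time changed.
diff previous.txt current.txt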
