I set up a build project in TeamCity and integrated SonarQube with it. The project builds and even publishes the report to the SonarQube console successfully. But when the quality gate fails, it doesn't break the build. I searched and read about the build breaker, and it is already supported by the SonarQube plugin for TeamCity, according to this document: https://confluence.jetbrains.com/display/TW/SonarQube+Integration
Am I missing something in the configuration, or is there some gotcha? I searched a lot but didn't find any proper documentation or lead on this.
Yes, I had to write a custom script that breaks the build via its exit status. I used the Web API to check the Quality Gate (QG) status.
# Project key comes from the TeamCity build parameter
PROJECTKEY="%teamcity.project.id%"

# Ask the SonarQube Web API for the project's Quality Gate status;
# jq -r extracts the status as a raw (unquoted) string
QGSTATUS=$(curl -s -u SONAR_TOKEN: "http://SONAR_URL:9000/api/qualitygates/project_status?projectKey=$PROJECTKEY" | jq -r '.projectStatus.status')

if [ "$QGSTATUS" = "OK" ]
then
    exit 0
elif [ "$QGSTATUS" = "ERROR" ]
then
    exit 1
else
    # Unknown or missing status (e.g. the request failed): fail the build
    exit 1
fi
I managed to fail the build based on Quality Gate settings using the sonar.qualitygate.wait=true parameter.
There's an example on their GitLab pipeline sample page: https://docs.sonarqube.org/latest/analysis/gitlab-cicd/
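For illustration, a minimal sketch of passing the parameter on the command line; the project key and the timeout value here are placeholders:

# The scanner blocks until the server has computed the Quality Gate
# and exits non-zero when the gate fails
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.qualitygate.wait=true \
  -Dsonar.qualitygate.timeout=300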
The SonarQube plugin doesn't break the build when the quality gate has failed. Why? Everything is described here: Why You Shouldn't Use Build Breaker
The main conclusion is:
[...] SonarSource doesn't want to continue the feature. [...]
Once we started using wallboards we stopped using the Build Breaker plugin, but still believed that using it was an okay practice. And then came SonarQube 5.2, which cuts the connection between the analyzer and the database. Lots of good things came with that cut, including a major change in architecture: analysis of source code is done on the analyzer side and all aggregate number computation is now done on the server side. Which means… that the analyzer doesn't know about the Quality Gate anymore. Only the server does, and since analysis reports are processed serially, first come first served, it can take a while before the Quality Gate result for a job is available.
In other words, from our perspective, the Build Breaker feature doesn't make sense anymore.
You have to verify the quality gate status on your own. You can read how to do it here: Access quality gate status from sonarqube api
The answer to xpmatteo's question:
Am I the only one that finds it difficult to understand what the quoted explanation means?
You have two tools: SonarScanner and SonarQube.
1) SonarScanner is executed on CI servers. It analyses source code and pushes analysis results to the SonarQube server.
2) SonarQube server processes data and knows if the new changes pass Quality Gates.
SonarScanner has no idea about the final result (pass or fail), so it cannot fail the build (it had that information before SonarQube 5.2, because it processed all the data itself and pushed only the results to the database). This means the Build Breaker plugin no longer makes sense, because it cannot work with the current design. After executing SonarScanner you have to poll the server and check the Quality Gate status. Then you may decide whether or not the build should fail.
Follow the post below; it might help you.
https://docs.sonarqube.org/display/SONARQUBE45/Build+Breaker+Plugin
Run your SonarQube task with the property sonar.buildbreaker.skip set to false (i.e. don't skip the build breaker), e.g.:
gradle clean build sonarqube publish -Dsonar.buildbreaker.skip=false
In my scenario the CI is GitHub Actions, but irrespective of the CI tool, Sonar's quality gate status (red/green) should be sent to your CI. You can browse the report status at http://<sonar_host>:<port>/api/ce/task?id=<taskId> once the report has been generated.
You have to run a script after the report is generated to check the status and fail the job if SonarQube fails.
I don't have any experience in non-functional testing, but I have just written a JMeter test and hooked it up in GitLab CI. I am generating a testresults.jtl and have added it to the artifacts.
But I am not sure how to read the results, or how to compare them with previous results to see or get notified if there are any changes in performance.
What should I do?
You can consider using the Taurus tool, which:
Has a JUnit XML Reporter producing JUnit-style XML result files which can be "understood" by GitLab CI
Has a Pass/Fail Criteria subsystem where you can specify error thresholds: if, for example, the response time is higher than the defined value, Taurus will stop with a non-zero exit status code, and GitLab will automatically fail the build on getting a non-zero exit code (see the sketch after this list)
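A minimal sketch of both features together, assuming a JMeter script named test.jmx; the criteria values are made-up examples:

# Hypothetical Taurus config written inline: "passfail" stops the run with a
# non-zero exit code when a criterion fires, "junit-xml" writes a report
# that GitLab CI can pick up
cat > taurus.yml <<'EOF'
execution:
- scenario:
    script: test.jmx

reporting:
- module: passfail
  criteria:
  - avg-rt>500ms for 30s, stop as failed
- module: junit-xml
  filename: results/report.xml
EOF

bzt taurus.yml   # a non-zero exit status here fails the GitLab CI job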
As of SonarQube 7.7, the Sonar - Gitlab plugin is unavailable for compatibility reasons.
In the meantime, is there a way to fail a GitLab CI pipeline on a Quality Gate failure?
The Sonar Scanner creates a little folder in the scan execution folder which contains a file report-task.txt.
- scan_exec_folder
| - .scannerwork
| | - report-task.txt
This report-task.txt file contains basic info about the current scan, including
SonarQube server URL
ceTaskUrl (namely, the Compute Engine Task URL of the current scan)
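For reference, the file is a set of plain key=value lines, roughly like this (the values shown are illustrative):

projectKey=my-project
serverUrl=http://localhost:9000
serverVersion=7.9.1.27448
dashboardUrl=http://localhost:9000/dashboard?id=my-project
ceTaskId=AXB1q2c3d4e5
ceTaskUrl=http://localhost:9000/api/ce/task?id=AXB1q2c3d4e5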
By curling the ceTaskUrl, you can get the analysis status and, once the analysis has succeeded, the analysisId. (You'll almost certainly have to wait for the analysis to complete; you can loop on the value of the status, for example.)
Next, curling the SonarQube server URL on path
/api/qualitygates/project_status?analysisId=${yourAnalysisId}
will return the result of the Quality Gate computation in a json document. If the status is ERROR, you know that at least one criterion has failed.
A bit of tweaking with greps and awks will allow you to script this procedure and incorporate it as a task in your Gitlab CI pipeline.
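A sketch of that procedure in shell, using sed and jq instead of grep and awk; SONAR_TOKEN is assumed to be set in the environment:

# Pull the server URL and the Compute Engine task URL out of the report file
SERVER_URL=$(sed -n 's/^serverUrl=//p' .scannerwork/report-task.txt)
CE_TASK_URL=$(sed -n 's/^ceTaskUrl=//p' .scannerwork/report-task.txt)

# Wait until the server has finished processing the analysis report
while true; do
  TASK=$(curl -s -u "$SONAR_TOKEN:" "$CE_TASK_URL")
  case "$(echo "$TASK" | jq -r '.task.status')" in
    SUCCESS) break ;;
    FAILED|CANCELED) exit 1 ;;
  esac
  sleep 5
done

# Fetch the Quality Gate result for this specific analysis
ANALYSIS_ID=$(echo "$TASK" | jq -r '.task.analysisId')
QG_STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SERVER_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')

# Fail the pipeline if at least one criterion failed
[ "$QG_STATUS" = "OK" ] || exit 1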
We are actively using GoCD. We get JUnit, Jasmine, and other results; however, the build artifacts are always published by GoCD and picked up by other agents to perform automated deployment.
We wish to set percentage markers for the JUnit, Jasmine, etc. results, and if the observed value is lower than the marker, we want GoCD to not publish the artifacts.
Any ideas?
Ideally, after the report creation another task kicks in which verifies the report results.
It could be, for example, a grep command inside a shell script looking for the words fail or error in the XML report files. As soon as the task finishes with a return code not equal to 0, GoCD considers the task to be failed.
The same applies to the percentage marker: a task is needed which calculates the percentage and then provides an appropriate return code, 0 when the percentage goal is met or exceeded and non-zero when the goal has not been met. This could also be implemented as a custom task, such as a shell script evaluating the reports (see the sketch below).
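A sketch of such a task, assuming standard JUnit XML reports under a reports/ directory and a made-up 90% goal:

# Sum the tests= and failures= attributes across all JUnit XML reports
TOTAL=$(grep -ho 'tests="[0-9]*"' reports/*.xml | awk -F'"' '{s+=$2} END {print s+0}')
FAILED=$(grep -ho 'failures="[0-9]*"' reports/*.xml | awk -F'"' '{s+=$2} END {print s+0}')

# No tests at all counts as a failed goal
[ "$TOTAL" -gt 0 ] || exit 1

# Return 0 when the pass rate meets the 90% goal, 1 otherwise
PASS_PCT=$(( (TOTAL - FAILED) * 100 / TOTAL ))
[ "$PASS_PCT" -ge 90 ] || exit 1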
The pipeline itself can be configured to not publish any artifacts in case the task fails or errors.
I'm following this guide to be able to control the job status based on the Sonar report: https://docs.sonarqube.org/display/SONARQUBE53/Breaking+the+CI+Build
Here, it is explained that you get a taskId, and when the task is completed you retrieve an analysisId that can be used to get the quality gate info using /api/qualitygates/project_status?analysisId=
I would have expected this analysisId to persist and provide the same report over time.
That does not seem to be the case. From my experience, the project_status API always returns the last valid report, and past analyses are no longer kept.
Here is the protocol I used to demonstrate this:
Trigger a first analysis, which gives a first report:
api/qualitygates/project_status?analysisId=AWEnFPG63R-cEOOz4bmK
with status ERROR and coverage = 80%.
Then trigger a second analysis, which gives another id:
api/qualitygates/project_status?analysisId=AWEnHBj53R-cEOOz4bny
with status OK and coverage = 90%.
Now, calling back the first analysisId, api/qualitygates/project_status?analysisId=AWEnFPG63R-cEOOz4bmK, shows that the report has changed and is the same as the last one.
Can someone explain the concept of analysisId to me? Because this is not really an identifier of an analysis here.
The link you provide in your question is to an archived, rather old version of the documentation. Since your comment reveals that you are on a current (6.7.1) version of SonarQube, then you'll benefit from using the current documentation.
In current versions, Webhooks allow you to notify external systems once analysis report processing is complete. The SonarQube Scanner for Jenkins makes it very easy to use webhooks in a pipeline, but even if you're not using Jenkins pipelines, you should still use webhooks instead of trying to retrieve this all manually. As shown in the docs (linked earlier) the webhook payload includes analysis timestamp, project name and key, and quality gate status.
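For reference, the payload is a JSON document along these lines; the field names follow the webhook documentation, the values are purely illustrative, and the conditions array is trimmed:

{
  "serverUrl": "http://localhost:9000",
  "taskId": "AVh21JS2JepAEhwQ-b3u",
  "status": "SUCCESS",
  "analysedAt": "2016-11-18T10:46:28+0100",
  "project": {
    "key": "my-project",
    "name": "My Project"
  },
  "qualityGate": {
    "name": "SonarQube way",
    "status": "OK",
    "conditions": [ ... ]
  }
}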
TXTFIT = test execution time for individual test
Hello,
I'm using Sonar to analyze my Maven Java Project. I'm testing with JUnit and generating reports on the test execution time with the Maven Surefire plugin.
In my Sonar I can see the test execution time and drill down to see how long each individual test took. In the time machine I can only compare the overall test execution time between two releases.
What I want is to see how the TXTFIT changed from the last version.
For example:
In version 1.0 of my software, htmlParserTest() takes 1 sec to complete. In version 1.1 I add a whole bunch of tests (so the overall execution time is going to be way longer), but htmlParserTest() also suddenly takes 2 secs. I want to be notified: "Hey mate, htmlParserTest() takes twice as long as it used to. You should take a look at it".
What I'm currently struggling to find out:
How exactly does the TXTFIT get from the Surefire XML report into Sonar?
I was looking at AbstractSurefireParser.java, but I'm not sure if that's actually the default Surefire plugin. It turned out I was looking at 5-year-old stuff; I'm currently checking out this. I still have no idea where Sonar is getting the TXTFIT from, or how and where it connects it to the source files.
Can I find the TXTFIT in the Sonar DB?
I'm looking at the local DB of my test Sonar instance with DbVisualizer, and I don't really know where to look. The SNAPSHOT_DATA table doesn't seem to be human-readable.
Are the TXTFIT even saved in the DB?
Depending on the answer, I either have to write a Sensor that actually saves them, or a widget that simply shows them on the dashboard.
Any help is very much appreciated!
The web service api/tests/*, introduced in version 5.2, allows you to get this information. Example: http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c
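A sketch of pulling the per-test durations from that endpoint, assuming the response lists each test with a durationInMs field; dumping this for two releases and diffing the output would show which individual tests got slower:

# Print "name duration" pairs for every test in the given test file
curl -s 'http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c' | jq -r '.tests[] | "\(.name) \(.durationInMs)ms"'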