Sonar API: what is the purpose of analysisId? - sonarqube

I'm following this guide to control job status based on the Sonar report: https://docs.sonarqube.org/display/SONARQUBE53/Breaking+the+CI+Build
It explains that you get a taskId, and once the task is completed you retrieve an analysisId that can be used to get the quality gate info via /api/qualitygates/project_status?analysisId=
I would have expected this analysisId to persist and return the same report over time.
That does not seem to be the case. In my experience, the project_status API always returns the latest report, and past analyses are no longer kept.
Here is the protocol I used to demonstrate this:
Trigger a first analysis, which gives me a first report:
api/qualitygates/project_status?analysisId=AWEnFPG63R-cEOOz4bmK
with status ERROR and coverage = 80%.
Then trigger a second analysis, which gives me another id:
api/qualitygates/project_status?analysisId=AWEnHBj53R-cEOOz4bny
with status OK and coverage = 90%.
Now, if I call back the first analysisId, api/qualitygates/project_status?analysisId=AWEnFPG63R-cEOOz4bmK, the report has changed and is the same as the latest one (the exact call I re-run is shown below).
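For reference, the check itself is just a curl call plus jq; SONAR_URL and SONAR_TOKEN below stand in for my actual server and token:

# Re-query the FIRST analysisId after the second analysis has been processed;
# the status returned now matches the latest report instead of the original one.
curl -s -u SONAR_TOKEN: "http://SONAR_URL:9000/api/qualitygates/project_status?analysisId=AWEnFPG63R-cEOOz4bmK" | jq '.projectStatus.status'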
Can someone explain the concept of analysisId? Because this is not really an identifier of an analysis here.

The link you provided in your question points to an archived, rather old version of the documentation. Since your comment reveals that you are on a current (6.7.1) version of SonarQube, you'll benefit from using the current documentation.
In current versions, webhooks allow you to notify external systems once analysis report processing is complete. The SonarQube Scanner for Jenkins makes it very easy to use webhooks in a pipeline, but even if you're not using Jenkins pipelines, you should still use webhooks instead of trying to retrieve all of this manually. As shown in the docs (linked earlier), the webhook payload includes the analysis timestamp, the project name and key, and the quality gate status.
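As a rough sketch (based on the documented payload fields, not an exact dump), a CI-side handler can read everything it needs straight from the JSON that SonarQube POSTs; the file name webhook-payload.json and the jq paths here are assumptions:

# webhook-payload.json = the body of the POST SonarQube sent to your webhook endpoint
STATUS=$(jq -r '.qualityGate.status' webhook-payload.json)     # e.g. OK or ERROR
PROJECT=$(jq -r '.project.key' webhook-payload.json)
ANALYSED_AT=$(jq -r '.analysedAt' webhook-payload.json)
echo "Quality gate for $PROJECT analysed at $ANALYSED_AT: $STATUS"
[ "$STATUS" = "OK" ] || exit 1                                 # fail the job on a red gate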

Related

Why is Jenkins.get().getRootUrl() not available when generating DSL?

I'm debugging a problem with atlassian-bitbucket-server-integration-plugin. The behavior occurs when generating a multi-branch pipeline job, which requires a Bitbucket webhook. The plugin works fine when creating the pipeline job from the Jenkins UI. However, when using DSL to create an equivalent job, the plugin errors out attempting to create the webhook.
I've tracked this down to a line in RetryingWebhookHandler:
String jenkinsUrl = jenkinsProvider.get().getRootUrl();
if (isBlank(jenkinsUrl)) {
    throw new IllegalArgumentException("Invalid Jenkins base url. Actual - " + jenkinsUrl);
}
The jenkinsUrl is used as the target for the webhook. When the pipeline job is created from the UI, the jenkinsUrl is set as expected. When the pipeline job is created by my DSL in a freeform job, the jenkinsUrl is always null. As a result, the webhook can't be created and the job fails.
I've tried various alternative ways to get the Jenkins root URL, such as static references like Jenkins.get().getRootUrl() and JenkinsLocationConfiguration.get().getUrl(). However, all values come up empty. It seems like the Jenkins context is not available at this point.
I'd like to submit a PR to fix this behavior in the plugin, but I can't come up with anything workable. I am looking for suggestions about the root cause and potential workarounds. For instance:
Is there something specific about the way my freeform job is executed that could cause this?
Is there anything specific to the way jobs are generated from DSL that could cause this?
Is there another mechanism I should be looking at to get the root URL from configuration, which might work better?
Is it possible that this behavior points to a misconfiguration in my Jenkins instance?
If needed, I can share the DSL I'm using to generate the job, but I don't think it's relevant. By commenting out the webhook code that fails, I've confirmed that the DSL generates a job with the correct config.xml underneath. So, the only problem is how to get the right configuration to the plugin so it can set up the webhook.
It turns out that this behavior was caused by a partial misconfiguration of Jenkins.
While debugging problems with broken build links in Bitbucket (pointing me at unconfigured-jenkins-location instead of the real Jenkins URL), I discovered a yellow warning message on the front page of Jenkins which I had missed before, telling me that the root server URL was not set:
Jenkins root URL is empty but is required for the proper operation of many Jenkins features like email notifications, PR status update, and environment variables such as BUILD_URL.
Please provide an accurate value in Jenkins configuration.
This error message had a link to Manage Jenkins > Configure System > Jenkins Location. The correct Jenkins URL actually was set there (I had already double-checked this), but the system admin email address in the same section was not set. When I added a valid email address, the yellow warning went away.
This change fixed both the broken build URL in Bitbucket and the problems with my DSL. So, even though it doesn't make much sense, the missing system admin email address appears to have been the root cause of this behavior.

Quality Gate Failure in SonarQube does not fail the build in Teamcity

I set up a build project in TeamCity and integrated SonarQube with it. The project builds and even publishes the report successfully to the SonarQube console. But when the quality gate fails, it does not break the build. I searched and read about the build breaker, but that is already supported by the SonarQube plugin for TeamCity, according to this document: https://confluence.jetbrains.com/display/TW/SonarQube+Integration
Am I missing something in the configuration, or is there a gotcha? I searched a lot but didn't find any proper documentation or lead on this.
Yeah, I had to write a custom script that uses the exit status to break the build. I used the API to check the status of the quality gate.
PROJECTKEY="%teamcity.project.id%"
# Ask the SonarQube server for the project's current quality gate status
QGSTATUS=`curl -s -u SONAR_TOKEN: http://SONAR_URL:9000/api/qualitygates/project_status?projectKey=$PROJECTKEY | jq '.projectStatus.status' | tr -d '"'`
if [ "$QGSTATUS" = "OK" ]
then
    exit 0
elif [ "$QGSTATUS" = "ERROR" ]
then
    exit 1
fi
I managed to fail the build based on Quality Gate settings using the sonar.qualitygate.wait=true parameter.
There's an example on their GitLab pipeline sample page: https://docs.sonarqube.org/latest/analysis/gitlab-cicd/
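For reference, a minimal invocation along those lines (the project key, server URL, and token are placeholders; the timeout parameter is optional and shown only as an example):

# With sonar.qualitygate.wait=true the scanner itself polls the server after the
# analysis and exits with a non-zero code if the quality gate fails.
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.host.url=http://SONAR_URL:9000 \
  -Dsonar.login=SONAR_TOKEN \
  -Dsonar.qualitygate.wait=true \
  -Dsonar.qualitygate.timeout=300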
The SonarQube plugin doesn't break the build when the quality gate has failed. Why? Everything is described here: Why You Shouldn't Use Build Breaker
The main conclusion is:
[...] SonarSource doesn't want to continue the feature. [...]
Once we started using wallboards we stopped using the Build Breaker plugin, but still believed that using it was an okay practice. And then came SonarQube 5.2, which cuts the connection between the analyzer and the database. Lots of good things came with that cut, including a major change in architecture: analysis of source code is done on the analyzer side and all aggregate number computation is now done on the server side. Which means… that the analyzer doesn't know about the Quality Gate anymore. Only the server does, and since analysis reports are processed serially, first come first served, it can take a while before the Quality Gate result for a job is available.
In other words, from our perspective, the Build Breaker feature doesn't make sense anymore.
You have to verify the quality gate status on your own. You can read how to do it here: Access quality gate status from sonarqube api
The answer to xpmatteo's question:
Am I the only one that finds it difficult to understand what the quoted explanation means?
You have two tools: SonarScanner and SonarQube.
1) SonarScanner is executed on CI servers. It analyses source code and pushes the analysis results to the SonarQube server.
2) The SonarQube server processes the data and knows whether the new changes pass the quality gates.
SonarScanner has no idea about the final result (pass or fail), so it cannot fail the build (it had that information before SonarQube 5.2, because it processed all the data itself and pushed only results to the database). This means the Build Breaker plugin no longer makes sense, because it cannot work with the current design. After executing SonarScanner you have to poll the server and check the quality gate status; then you can decide whether the build should fail or not.
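A sketch of that poll-then-decide approach, assuming the scanner's usual .scannerwork/report-task.txt output file and placeholders for SONAR_URL and SONAR_TOKEN:

# The scanner writes the compute engine task id into report-task.txt
CE_TASK_ID=$(sed -n 's/^ceTaskId=//p' .scannerwork/report-task.txt)

# Wait until the server has finished processing the analysis report
while :; do
  TASK_STATUS=$(curl -s -u SONAR_TOKEN: "http://SONAR_URL:9000/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.status')
  [ "$TASK_STATUS" = "PENDING" ] || [ "$TASK_STATUS" = "IN_PROGRESS" ] || break
  sleep 5
done

# Only now does the server know the quality gate result for this analysis
ANALYSIS_ID=$(curl -s -u SONAR_TOKEN: "http://SONAR_URL:9000/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')
QG_STATUS=$(curl -s -u SONAR_TOKEN: "http://SONAR_URL:9000/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')

[ "$QG_STATUS" = "OK" ] || exit 1   # break the build on a failed gate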
The post below might help you:
https://docs.sonarqube.org/display/SONARQUBE45/Build+Breaker+Plugin
Run your SonarQube task with the property sonar.buildbreaker.skip, e.g.:
gradle clean build sonarqube publish -Dsonar.buildbreaker.skip=false
In my scenario the CI is GitHub Actions, but irrespective of the CI tool, Sonar's quality gate status (red/green) has to be sent back to your CI. You can browse the report status at http://<sonar-host>:<port>/api/ce/task?id=<task-id> once the report has been generated.
You have to run a script after the report has been generated to check the status and fail the job if the quality gate fails.

Step Failure not reported by Composed Task Runner or reflected in Spring Cloud Dataflow Tables

Currently we are using Spring Cloud Data Flow to run a sequence of apps we have created, based on a definition. Each of the apps we have made is a Spring Batch job with individual steps. The current issue is that when one of the steps inside an app's batch job fails, it is reflected as expected in the step_execution, job_execution, and task_execution tables in the SCDF database. However, we are not able to rerun a failed SCDF job from the top SCDF level, because the row in the step_execution table for SCDF's step corresponding to the overall app never moves to FAILED in the status column; it is always COMPLETED no matter what happens. Below I have included a picture which shows what I mean. test-simple8-test-app is the app we have created, while check-step, sleep-step, and should-error-step are steps inside the job for that app. You can see that should-error-step has FAILED for both ExitCode and Status, while the entry for the app itself has COMPLETED for Status and FAILED for ExitCode.
[Screenshot: the relevant step_execution table entries]
We have tried altering what we report in the task_execution table, since we saw that CTR looks for certain fields there, but it still does not affect the Status column in step_execution. If we manually change that entry in the DB to FAILED, the flow proceeds as we would expect and as is normal for Spring Batch: it resumes the job from that app and re-executes it.
Is there a good way to solve this problem, or is it a problem with the way we are approaching it?
Edit: Added Flow Diagram for better clarity

SonarQube Generic Execution Report is ignored

I have spent the whole morning trying to set up e2e test reporting via SonarQube's Generic Execution, using the Generic Test Data -> Generic Execution feature.
I created a custom XML report and added it to the scan properties like this:
sonar.testExecutionReportPaths=**/e2e-report.xml
So far, SonarQube seems to completely ignore this property, and I see no attempt to parse the file in the logs. Has anyone made this work?
These are Sonar's links about the Generic Execution feature:
https://docs.sonarqube.org/display/SONAR/Generic+Test+Data
https://github.com/SonarSource/sonarqube/blob/master/sonar-scanner-engine/src/main/java/org/sonar/scanner/genericcoverage/GenericTestExecutionSensor.java
This is a SonarQube 6.2+ feature. Make sure to use an appropriate SonarQube version.
In addition, sonar.testExecutionReportPaths does not allow matchers (like *).
Please provide relative or absolute paths, comma separated.
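For example, pass the paths explicitly when invoking the scanner (the report file names are just placeholders):

# No wildcards: list each generic execution report explicitly, comma separated
sonar-scanner \
  -Dsonar.testExecutionReportPaths=reports/e2e-report.xml,reports/e2e-report-2.xml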
See also:
The official documentation of the Generic Test Data feature
The source code that looks up the generic execution files

Sonar: Execution time history of single test

TXTFIT = test execution time for individual test
Hello,
I'm using Sonar to analyze my Maven Java project. I'm testing with JUnit and generating reports on test execution time with the Maven Surefire plugin.
In Sonar I can see the test execution time and drill down to see how long each individual test took. In the Time Machine I can only compare the overall test execution time between two releases.
What I want is to see how the TXTFIT changed from the last version.
For example:
In version 1.0 of my software, htmlParserTest() takes 1 sec to complete. In version 1.1 I add a whole bunch of tests (so the overall execution time is going to be way longer), but htmlParserTest() also suddenly takes 2 secs; I want to be notified: "Hey mate, htmlParserTest() takes twice as long as it used to. You should take a look at it."
What I'm currently struggling to find out:
How exactly does the TXTFIT get from the Surefire XML report into Sonar?
I'm currently looking at AbstractSurefireParser.java
but I'm not sure if that's actually the default Surefire plugin.
I was looking at 5-year-old stuff; I'm currently checking out this. I still have no idea where Sonar gets the TXTFIT from, or how and where it connects it to the source files.
Can I find the TXTFIT in the Sonar DB?
I'm looking at the local DB of my test Sonar instance with DbVisualizer, and I don't really know where to look. The SNAPSHOT_DATA table doesn't seem to be human-readable.
Is the TXTFIT even saved in the DB?
Depending on the answer, I either have to write a sensor that actually saves it or a widget that simply shows it on the dashboard.
Any help is very much appreciated!
The web service api/tests/*, introduced in version 5.2, allows you to get this information. Example: http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c
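A quick sketch of pulling per-test durations from that web service with curl and jq; the tests array and the durationInMs field name are assumptions based on the 5.x response format:

# List each test name with its execution time in milliseconds
curl -s "http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c" \
  | jq -r '.tests[] | "\(.name)\t\(.durationInMs)"'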
