Fail VSTS build if SonarQube fails quality gate

We use VSTS build with standard SonarQube build steps:
SonarQube for MsBuild - Begin Analysis
... build
SonarQube for MsBuild - End Analysis
Some time after the build I can see the analysis results in SonarQube, including whether the project passed or failed the quality gate.
But the VSTS build succeeds even when the quality gate fails.
Is there a way to fail a VSTS build if the quality gate fails?
Following this:
http://docs.sonarqube.org/display/SONAR/Breaking+the+CI+Build
I've tried looking for the report-task.txt file, but I can't see it anywhere.
I could probably just run MSBuild.SonarQube.Runner.exe as a command-line build step, as described here:
http://docs.sonarqube.org/display/SONAR/Analyzing+with+SonarQube+Scanner+for+MSBuild#AnalyzingwithSonarQubeScannerforMSBuild-AnalyzingfromtheCommandLine
But I thought I should first try the standard SonarQube build steps.

Here is a link on failing the build on quality gate violations with SonarQube 5.3 or later; it uses the SonarQube for MSBuild - Begin Analysis task:
https://blogs.msdn.microsoft.com/visualstudioalm/2016/02/11/use-sonarqube-quality-gates-to-control-your-visual-studio-team-services-builds/
The updated task is not available with TFS 2015 Update 1, but it is available in Update 2 RC1 and in VSTS (VSO).
Regards,
Wes

I too had the requirement to fail the build if the SonarQube quality gate fails. I created a PowerShell task after the SonarQube task. Here is the script to check the status:
function Get-SonarQubeStatus() {
    # Step 1. Create a username:password pair
    $credPair = "username:password"
    # Step 2. Encode the pair to a Base64 string
    $encodedCredentials = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($credPair))
    # Step 3. Form the header and add the Authorization attribute to it
    $headers = @{ Authorization = "Basic $encodedCredentials" }
    # Step 4. Make the GET request
    $responseData = Invoke-WebRequest -Uri "https://localhost/api/qualitygates/project_status?projectKey=<projectkey>" -Method Get -Headers $headers -UseBasicParsing
    # Write-Host $responseData.Content   # uncomment to inspect the raw response
    # Step 5. Parse the response and fail the build if the gate status is ERROR
    $x = $responseData.Content | ConvertFrom-Json
    $sonarQualityGateResult = $x.projectStatus.status
    if ($sonarQualityGateResult -eq "ERROR") {
        Write-Host "CI failed due to SonarQube quality gate"
        exit 1
    }
}
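One caveat with this approach: api/qualitygates/project_status?projectKey=... reports the gate status of the last completed analysis, so if the background task for the analysis just submitted by the build has not finished yet, the script can read a stale result. A rough sketch of a wait loop to run before the check (the api/ce/component endpoint, its componentKey parameter name, the 60-second timeout, and the sleep interval are my assumptions and may vary between SonarQube versions):

# Assumption: same $headers and <projectkey> as above; wait until the project's
# Compute Engine queue is empty before reading the quality gate status
$deadline = (Get-Date).AddSeconds(60)
do {
    $queue = (Invoke-WebRequest -Uri "https://localhost/api/ce/component?componentKey=<projectkey>" -Method Get -Headers $headers -UseBasicParsing).Content | ConvertFrom-Json
    $pending = $queue.queue.Count
    if ($pending -gt 0) { Start-Sleep -Seconds 5 }
} while ($pending -gt 0 -and (Get-Date) -lt $deadline)
Get-SonarQubeStatus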

Related

Azure Pipelines artifact download from Azure Artifacts stopped working

I have set up a pair of Azure pipelines such that successful completion of the first pipeline triggers the second pipeline. The first pipeline publishes a small JSON file to Azure Artifacts, and then the second pipeline downloads the JSON file.
Here are the two pipelines.
Pipeline one:
# Pipeline one
trigger:
- '*'

pool:
  name: 'Default'
  demands:
  # I use this property to make sure it runs on the correct build agent
  - Can_do_builds -equals true

steps:
- script: |
    echo This is Pipeline One.
    echo Running on $(Agent.MachineName)
    echo Running in $(Pipeline.Workspace)
  displayName: 'Display Pipeline One info'

- powershell: |
    $json = @"
    {
    'build_id': '$(Build.BuildID)',
    'build_number': '$(Build.BuildNumber)',
    'build_type': '$(Build.Reason)',
    'source_repo': '$(Build.Repository.Name)',
    'source_branch': '$(Build.SourceBranchName)',
    'source_commit_id': '$(Build.SourceVersion)'
    }
    "@
    $f = '$(Pipeline.Workspace)/s/dropfile.json'
    Add-Content -Path $f -Value $json
    Write-Host Contents of $f
    Write-Host "================"
    Get-Content $f
  displayName: Create the dropfile

- publish: dropfile.json
  artifact: theDropfile
  displayName: Publish the dropfile
Pipeline two:
# Pipeline two
trigger:
- master

pool:
  name: 'Default'
  demands:
  # I use this property to make sure it runs on the other build agent
  - Can_do_integration_tests -equals true

resources:
  pipelines:
  - pipeline: pipeline-one
    source: my_workspace.pipeline-one
    trigger:
      enabled: true
      branches:
        include:
        - master
        - develop
        - release_*
        - passing-info-btwn-pipelines

steps:
- script: |
    echo This is Pipeline Two.
    echo Running on $(Agent.MachineName)
    echo Running in $(Pipeline.Workspace)
    echo Build reason is $(Build.Reason)
    echo Triggering resource is $(Resources.TriggeringAlias)
    echo Triggering category is $(Resources.TriggeringCategory)
  displayName: 'Display Pipeline Two info'

# - task: DownloadPipelineArtifact@2
#   displayName: Download the dropfile
#   inputs:
#     source: 'specific'
#     project: 'QA'
#     pipeline: 'my_workspace.pipeline-one' # if it will accept strings
#     # pipeline: 12 # if it won't accept strings
#     preferTriggeringPipeline: 'true'
#     runVersion: 'latest'
#     artifact: theDropfile
#     path: '$(Pipeline.Workspace)/s/'

- download: pipeline-one
  artifact: theDropfile
  patterns: '**/*.json'
  displayName: Download the dropfile the other way

- powershell: |
    $f = "$(Pipeline.Workspace)/s/dropfile.json"
    if (Test-Path $f) {
      Get-Content $f
    } else {
      Write-Host "$f not found"
    }
  displayName: Read the dropfile
Everything worked fine until our IT gang did two things:
Removed Just-In-Time.
Added our two VMs (self-hosted VMs, running Windows Server 2016 I think) to our companyname.local domain.
Pipeline one (the publishing one) still works. Every run publishes the artifact. I can verify that by navigating through the build log to the artifact link and downloading it.
But pipeline two (the downloading one) doesn't work anymore. It tries for 18 minutes to download the artifact, and then gives up. The logfile doesn't give much information, but it looks like the Azure Artifacts server is rejecting the agent's HTTP request. The entire contents of the logfile are as follows:
Starting: Download the dropfile the other way
==============================================================================
Task : Download pipeline artifact
Description : Download a named artifact from a pipeline to a local path
Version : 1.198.0
Author : Microsoft Corporation
Help : Download a named artifact from a pipeline to a local path
==============================================================================
Download from the specified build: #4927
Download artifact to: E:\acmbuild1\_work\5/pipeline-one/theDropfile
Using default max parallelism.
Max dedup parallelism: 192
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
ApplicationInsightsTelemetrySender correlated 2 events with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
##[error]No such host is known.
Finishing: Download the dropfile the other way
At first I thought that the failure was with the DownloadPipelineArtifact@2 task, which is why I tried using the download task instead. But I believe they're both the same code under the hood. In any case, the failure mode and the error message are the same.
What is causing the download failure? How can I fix it -- or how can the IT team fix it?
“No such host is known” in your logfile suggests that the VMs added to your company domain can no longer reach Azure DevOps.
You may need to ask the IT team to add certain IPs and URLs to the allowlist; the common domain URLs are dev.azure.com and *.dev.azure.com.
Please refer to the doc: Allowed address lists and network connections
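A quick way to confirm this from the affected agent VM is a sketch like the following (assuming Windows PowerShell with the Test-NetConnection and Resolve-DnsName cmdlets available):

# Run on the build agent VM: check DNS resolution and TCP reachability
Test-NetConnection -ComputerName dev.azure.com -Port 443
# "No such host is known" usually corresponds to a DNS failure,
# which Resolve-DnsName can confirm directly:
Resolve-DnsName dev.azure.com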

Error on Publish Quality Gate Result in Azure DevOps Release pipeline

In my particular case I have to run the Sonar analysis in the release; I can't do it in the build pipeline because it depends on the environment.
I have been using Sonar in Azure DevOps for several years, but always running it from the build pipeline.
In this very special case I have to run Sonar in the release, and this is where the error occurs.
Error in: Publish Quality Gate Result:
##[error]The "path" argument must be of type string. Received type undefined
Run Code Analysis shows no warnings and no errors.
If I run it manually on my PC, it works fine:
sonar-scanner.bat -D"sonar.projectKey=SQLAA" -D"sonar.sources=Develop" -D"sonar.host.url=https://sssss.ssss.uuuu" -D"sonar.login=nnnnnnnnnnnn" -D"sonar.sql.dialect=tsql" -D"sonar.language=sql" -D"sonar.exclusions=DefinitionName/**" -D"sonar.scm.disabled=true" -D"sonar.verbose=true"
I use the standalone scanner in a simple release. I have tried the configuration described above, but it still gives the error.

Get Code Coverage from utPLSQL within Azure DevOps Pipeline?

I'm using utPLSQL 3.1.12.3589 on an Oracle 19c database.
The business logic to be tested is deployed in schema BUSINESS_LOGIC, the unit tests are deployed in schema UNIT_TESTS.
When I collect the code coverage within an Azure DevOps pipeline it seems to pick up only that from schema UNIT_TESTS. How can I get the coverage from schema BUSINESS_LOGIC?
utPLSQL-cli is called from a Powershell script (connection parameters are in the variables):
$argstr = @("run $db_user/$db_pw@$db_conn", "-f=UT_JUNIT_REPORTER", "-o=dbtest.xml", "-f=UT_COVERAGE_COBERTURA_REPORTER", "-o=dbcoverage.xml", "-f=UT_COVERAGE_HTML_REPORTER", "-o=html_coverage/coverage.html")
Start-Process -FilePath "\\my-server\utPLSQL-cli\bin\utplsql.bat" -ArgumentList $argstr -Wait -NoNewWindow
This is necessary because the tests are integrated into an Azure DevOps pipeline.
Thus the recommended approach to setting the coverage scope does not work for me (http://www.utplsql.org/utPLSQL/latest/userguide/coverage.html):
exec ut.run(ut_varchar2_list('BUSINESS_LOGIC'), ut_coverage_html_reporter());
I simply don't know where I could place the above statement to run the tests, gather the code coverage, and report back to DevOps. I assume the schema to cover must somehow be passed to utPLSQL-cli?
I noticed that I was using an up-to-date version of utPLSQL but not of utPLSQL-cli. Version 3.1.9 of the cli supports a command line parameter to pass a list of coverage schemes: --coverage-schemes.
This can be added to the argument list:
$argstr = @("run $db_user/$db_pw@$db_conn", ..., "--coverage-schemes=BUSINESS_LOGIC")
Now the coverage from BUSINESS_LOGIC is retrieved!
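Put together with the argument list from the question, the full call would look something like this (a sketch that only combines the original arguments with the new flag; nothing else changed):

# Same invocation as before, plus the coverage-schemes parameter
$argstr = @("run $db_user/$db_pw@$db_conn",
            "-f=UT_JUNIT_REPORTER", "-o=dbtest.xml",
            "-f=UT_COVERAGE_COBERTURA_REPORTER", "-o=dbcoverage.xml",
            "-f=UT_COVERAGE_HTML_REPORTER", "-o=html_coverage/coverage.html",
            "--coverage-schemes=BUSINESS_LOGIC")
Start-Process -FilePath "\\my-server\utPLSQL-cli\bin\utplsql.bat" -ArgumentList $argstr -Wait -NoNewWindow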

Sonarqube : The 'report' parameter is missing

I am using MSBuild. I have Java 8 installed.
I am running the following commands:
SonarQube.Scanner.MSBuild.exe begin /k:"ABC" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="8b839xxxxxxxxxxxxxxxxxxxxxxx6b00125bf92" /d:sonar.verbose=true
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\msbuild.exe" /t:rebuild
SonarQube.Scanner.MSBuild.exe end /d:sonar.login="8b839xxxxxxxxxxxxxxxxxxxxxxx6b00125bf92"
The last step fails:
ERROR: Error during SonarQube Scanner execution
ERROR: The 'report' parameter is missing
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
The SonarQube Scanner did not complete successfully
12:53:21.909 Creating a summary markdown file...
12:53:21.918 Post-processing failed. Exit code: 1
The MSBuild version is greater than 14.
Java 8 is properly installed, and the documentation indicates that Java 8 is adequate.
Any idea what could be wrong?
Where do I add the -X switch? I tried it on all three commands.
Update: I installed Java SDK 9. Still the same issue.
Update: With verbose logging and using the /n naming parameter:
INFO: Analysis report generated in 992ms, dir size=4 MB
INFO: Analysis reports compressed in 549ms, zip size=1 MB
INFO: Analysis report generated in C:\ABC\.sonarqube\out\.sonar\scanner-report
DEBUG: Upload report
DEBUG: POST 400 http://localhost:9000/api/ce/submit?projectKey=ABC | time=1023ms
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 54.833s
INFO: Final Memory: 51M/170M
INFO: ------------------------------------------------------------------------
DEBUG: Execution getVersion
DEBUG: Execution stop
ERROR: Error during SonarQube Scanner execution
ERROR: The 'report' parameter is missing
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
Process returned exit code 1
The SonarQube Scanner did not complete successfully
Creating a summary markdown file...
Post-processing failed. Exit code: 1
I struggled with the same problem with SonarQube and I finally found a solution:
you need to restart the Sonar service after entering the evaluation token.
Please note this isn't the answer; however, I feel this feedback is valuable to getting this question answered.
I can reproduce this issue in Postman with a POST request to:
http://localhost:9000/api/ce/submit?projectKey=myProjectKey
This returns
{
    "errors": [
        {
            "msg": "The 'report' parameter is missing"
        }
    ]
}
You can get a similar error by removing the projectKey query parameter. I tried adding a report query parameter and received the same error:
http://localhost:9000/api/ce/submit?projectKey=brian3016&report=report
Given this, I feel there is a problem with their code: it should have included a report parameter when creating the POST request, but it failed to do so.
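For reference, the same reproduction works without Postman; here is a sketch in PowerShell (the token and project key are placeholders; the token is passed as the username with an empty password, and note that Invoke-WebRequest throws on a 400 status by default):

# Reproduce the 400 response outside the scanner
$cred = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("myToken:"))
try {
    Invoke-WebRequest -Uri "http://localhost:9000/api/ce/submit?projectKey=myProjectKey" -Method Post -Headers @{ Authorization = "Basic $cred" } -UseBasicParsing
} catch {
    # Expected body: {"errors":[{"msg":"The 'report' parameter is missing"}]}
    $_.ErrorDetails.Message
}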
Verbose output seems to have changed from using the -X switch to /d:sonar.verbose=true, e.g.:
SonarScanner.MSBuild.exe begin /k:"myProjectKey" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="myLogin" /d:sonar.verbose=true
Note the verbose logging didn't give me any valuable insight.
(Also note that the documentation currently says to use SonarQube.Scanner.MSBuild.exe, but the verbose logger told me to switch to SonarScanner.MSBuild.exe.)
So... how do we report this issue to someone who can fix it? Their documentation says to go to Stack Overflow, so here we are.
I thought it may have been an issue with my project, so I created a new project with nothing other than the startup template Console Application. Same error.
In my case SonarQube 7.9.1 (deployed with Helm to a Kubernetes cluster) was missing the temp directory /opt/sonarqube/temp/tc/work/Tomcat/localhost/ROOT after a Helm rollback. No idea what happened to it.
The logfile /opt/sonarqube/logs/web.log inside the SonarQube pod had this error:
2021.02.02 06:57:03 WARN web[AXdZ6l6MParQCncJACv3][o.s.s.w.ServletRequest] Can't read file part for parameter report
java.io.IOException: The temporary upload location [/opt/sonarqube/temp/tc/work/Tomcat/localhost/ROOT] is not valid
The fix was to exec into the pod and create the missing directory. I would like to know the reason, though...
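For reference, a sketch of that fix from a machine with kubectl configured against the cluster (the pod name is a placeholder):

# Recreate the Tomcat temporary upload directory inside the SonarQube pod
kubectl exec -it <sonarqube-pod> -- mkdir -p /opt/sonarqube/temp/tc/work/Tomcat/localhost/ROOT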
The issue is with the Sonar service starting up.
First try to stop SonarStart.bat with Ctrl+C, and then try to open localhost:9000 (or whichever port you configured for the Sonar server).
If it still opens, go to Task Manager, search for the wrapper.exe service, and stop it. If no such service is found, go to Task Manager > Details and stop all java.exe processes.
Note: if you are running many Java applications, right-click java.exe, choose "Go to service", and stop only those java.exe processes that belong to the AppX deployment services.
Now start SonarStart.bat as administrator.
Today I faced the same error when using Jenkins to scan the code: after adding sonar.verbose=true, the log showed the POST to /api/ce/submit returning a 400 code.
I took the following steps to find the reason:
First I restarted SonarQube => still failed.
Then I checked the report file size with "du -sh": 108 MB, while the DB server supports 1 GB => still failed.
Finally I logged into the SonarQube server and checked access.log, web.log and the other logs, and found the error reason: "Processing of multipart/form-data request failed. No space left on device". Checking the server with "df -h" showed some devices 100% used, so I removed some unused files and that fixed it!
Check whether you have enough memory, e.g. with:
free -m
In my case I had to upgrade the memory.

Unable to integrate SonarQube analysis results with VSTS Build Summary

I am using the Prepare, Run and Publish analysis tasks in VSTS to run the SonarQube analysis and publish the results to the build summary. The first two steps execute successfully, but the Publish Analysis task fails because it is not able to fetch the task for the analysis ID. I get the following error message:
Could not fetch task for ID 'AWE9-wu8-fbfJflhFQ3-'
VSTS Publish Analysis Task Log:
2018-01-28T18:15:28.1037139Z ##[debug][SQ] Waiting for task 'AWE9-wu8-fbfJflhFQ3-' to complete.
2018-01-28T18:15:28.1037139Z ##[debug][SQ] API GET: '/api/ce/task' with query "{"id":"AWE9-wu8-fbfJflhFQ3-"}"
2018-01-28T18:15:28.1047138Z ##[debug][SQ] Publish task error: [SQ] Could not fetch task for ID 'AWE9-wu8-fbfJflhFQ3-'
2018-01-28T18:15:28.1047138Z ##[debug]task result: Failed
2018-01-28T18:15:28.1047138Z ##[error][SQ] Could not fetch task for ID 'AWE9-wu8-fbfJflhFQ3-'
2018-01-28T18:15:28.1047138Z ##[debug]Processed: ##vso[task.issue type=error;][SQ] Could not fetch task for ID 'AWE9-wu8-fbfJflhFQ3-'
2018-01-28T18:15:28.1047138Z ##[debug]Processed: ##vso[task.complete result=Failed;][SQ] Could not fetch task for ID 'AWE9-wu8-fbfJflhFQ3-'
2018-01-28T18:15:28.3907147Z ##[section]Finishing: Publish Analysis Result
I was seeing the exact same problem as Vignesh, running SonarQube 6.7.1 and the latest version of the VSTS SonarQube extension.
I found out what the problem was: it's in the SonarQube VSTS extension (Prepare, Analyse & Publish).
The extension uses basic authentication to communicate with the SonarQube API endpoint, passing the token as the username and null as the password.
The npm package 'request' (at least the latest version, 2.83.0) does not allow null passwords and returns 'auth() received invalid user or password'.
To fix it, the password should be set to an empty string instead.
Until the VSTS plugin is fixed by SonarSource, you can work around the issue by manually editing the extension on your VSTS build machine. The file to edit is: <build location>\_tasks\SonarQubePublish_291ed61f-1ee4-45d3-b1b0-bf822d9095ef\4.0.0\common\helpers\request.js
Add a new row after row 22:
options.auth.pass = "";
The end result should be something like:
var options = {
    auth: endpoint.auth
};
if (query) {
    options.qs = query;
    options.useQuerystring = true;
}
options.auth.pass = "";
request.get(__assign({ method: 'GET', baseUrl: endpoint.url, uri: path, json: true }, options), function (error, response, body) {
I give no guarantees, but this worked for me.
We are using the TFS extension in version 4.0.1 and the failure is still there.
2018-02-07T10:34:41.7065486Z ##[debug][SQ] Waiting for task 'AWFv1Mcg5obW39zt_5IE' to complete.
2018-02-07T10:34:41.7065486Z ##[debug][SQ] API GET: '/api/ce/task' with query "{"id":"AWFv1Mcdgfdg39zt_5IE"}"
2018-02-07T10:34:41.7690509Z ##[debug][SQ] Publish task error: [SQ] Could not fetch task for ID 'AWFv1Mcdgfdg39zt_5IE'
2018-02-07T10:34:41.7690509Z ##[debug]task result: Failed
2018-02-07T10:34:41.7690509Z ##[error][SQ] Could not fetch task for ID 'AWFv1Mcdgfdg39zt_5IE'
2018-02-07T10:34:41.7690509Z ##[debug]Processed: ##vso[task.issue type=error;][SQ] Could not fetch task for ID 'AWFv1Mcdgfdg39zt_5IE'
2018-02-07T10:34:41.7690509Z ##[debug]Processed: ##vso[task.complete result=Failed;][SQ] Could not fetch task for ID 'AWFv1Mcdgfdg39zt_5IE'
This was indeed caused by passing a null password to the request library.
A fix has been deployed (version 4.0.1 of the SonarQube extension, version 4.0.1 of the publish task). See https://jira.sonarsource.com/browse/VSTS-134
