Quality Gate does not fail when conditions for success are not met - sonarqube

I have set up a Quality Gate for my Jenkins project via SonarQube. One of my projects has no tests at all, so the analysis reports 0% code coverage. By the quality gate rules (<60% coverage = fail) my pipeline should return an error. However, this does not happen: the quality gate reports that the analysis was a success and its status is 'OK'. In another project, I removed some tests to push coverage below 60%, and the quality gate passed once again, even though it was meant to fail.
I previously had an error where the analysis always returned 0% coverage, but managed to fix it (with help from this link). I found a lot of articles with similar questions but no answers on any of them. This post looks promising, but I cannot find a suitable alternative to its suggestion.
It is worth mentioning that the analysis stage runs in parallel with another stage (to save some time). The Quality Gate stage comes shortly afterwards.
The relevant code I use to initialise the analysis for my project is below (the org.jacoco... part is the fix for the 0% coverage error mentioned above):
sh "mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent verify sonar:sonar -Dsonar.host.url=${env.SONAR_HOST_URL} -Dsonar.login=${env.SONAR_AUTH_TOKEN} -Dsonar.projectKey=${projectName} -Dsonar.projectName=${projectName} -Dsonar.sources=. -Dsonar.java.binaries=**/* -Dsonar.language=java -Dsonar.exclusions=$PROJECT_DIR/src/test/java/** -f ./$PROJECT_DIR/pom.xml"
The full quality gate code (to clarify how my quality gate starts and finishes):
stage("Quality Gate") {
    steps {
        timeout(time: 15, unit: 'MINUTES') { // If analysis takes longer than indicated time, then build will be aborted
            withSonarQubeEnv('ResearchTech SonarQube') {
                script {
                    // Workaround code, since we cannot have a global webhook
                    def reportFilePath = "target/sonar/report-task.txt"
                    def reportTaskFileExists = fileExists "${reportFilePath}"
                    if (reportTaskFileExists) {
                        def taskProps = readProperties file: "${reportFilePath}"
                        def authString = "${env.SONAR_AUTH_TOKEN}"
                        def taskStatusResult =
                            sh(script: "curl -s -X GET -u ${authString} '${taskProps['ceTaskUrl']}'", returnStdout: true)
                        //echo "taskStatusResult[${taskStatusResult}]"
                        def taskStatus = new groovy.json.JsonSlurper().parseText(taskStatusResult).task.status
                        echo "taskStatus[${taskStatus}]"
                        if (taskStatus == "SUCCESS") {
                            echo "Background tasks are completed"
                        } else {
                            while (true) {
                                sleep 10
                                taskStatusResult =
                                    sh(script: "curl -s -X GET -u ${authString} '${taskProps['ceTaskUrl']}'", returnStdout: true)
                                //echo "taskStatusResult[${taskStatusResult}]"
                                taskStatus = new groovy.json.JsonSlurper().parseText(taskStatusResult).task.status
                                echo "taskStatus[${taskStatus}]"
                                if (taskStatus != "IN_PROGRESS" && taskStatus != "PENDING") {
                                    break;
                                }
                            }
                        }
                    } else {
                        error "Haven't found report-task.txt."
                    }
                    def qg = waitForQualityGate() // Waiting for analysis to be completed
                    if (qg.status != 'OK') { // If quality gate was not met, then present error
                        error "Pipeline aborted due to quality gate failure: ${qg.status}"
                    }
                }
            }
        }
    }
}

What is shown in the SonarQube UI for the project? Does it show that the quality gate failed, or not?
I don't quite understand what you're doing in that pipeline script. It looks like you're waiting for the analysis twice, once by polling the task status yourself and once via waitForQualityGate(), but only checking for an error on the second. I use scripted pipeline, so I know it would look slightly different.
Update:
Based on your additional comment, if the SonarQube UI says that it passed the quality gate, then that means there's nothing wrong with your pipeline code (at least with respect to the quality gate). The problem will be in the definition of your quality gate.
However, I would also point out one other error in how you're checking for the background task results.
The possible values of "taskStatus" are "SUCCESS", "ERROR", "PENDING", and "IN_PROGRESS". To determine whether the task is still running, you have to check for either of the last two values; to determine whether it is complete, you have to check for either of the first two. You're checking for completion, but only for "SUCCESS". That means if the task failed, which it would if the quality gate computation failed (not what's happening here), you would keep waiting for it until you timed out.
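To make the terminal-state check above concrete, here is a minimal Python sketch of the polling predicate (the status names come from SonarQube's api/ce/task endpoint; the helper name is mine):

```python
# Status values returned by SonarQube's api/ce/task endpoint.
TERMINAL_STATUSES = {"SUCCESS", "ERROR"}       # task finished, one way or the other
RUNNING_STATUSES = {"PENDING", "IN_PROGRESS"}  # task still being processed

def is_task_finished(status):
    """Return True once the background task has reached a terminal state.

    Checking only for "SUCCESS" (as the pipeline above does) makes the loop
    spin until the timeout whenever the task ends in "ERROR".
    """
    if status in TERMINAL_STATUSES:
        return True
    if status in RUNNING_STATUSES:
        return False
    raise ValueError("Unexpected task status: %s" % status)
```

The Groovy loop in the question would use the same condition: break out when the status is in the terminal set, keep sleeping when it is in the running set.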

How to run conditional task in Airflow with previous http operator requested value

I am creating a DAG file with multiple SimpleHttpOperator requests. I need to skip the next task if the previous task returned a failed status, and only continue on success.
I tried BranchPythonOperator, inside which I decide which task to run next, but it does not seem to work.
A sample response from request_info:
{
    "data": {
        "name": "Allan",
        "age": "26",
        "gender": "male",
        "country": "California"
    },
    "status": "failed"
}
request_info = SimpleHttpOperator(
    task_id='get_info',
    endpoint='get/information',
    http_conn_id='localhost',
    data=({"guest": "1"}),
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    dag=dag
)
update_info = SimpleHttpOperator(
    task_id='update_info',
    endpoint='update/information',
    http_conn_id='localhost',
    data=("{{ti.xcom_pull(task_ids='request_info')}}"),
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    dag=dag
)
skipped_task = DummyOperator(
    task_id='skipped',
    dag=dag
)
skip_task = BranchPythonOperator(
    task_id='skip_task',
    python_callable=next_task,
    dag=dag
)
def next_task(**kwangs):
    status = "ti.xcom_pull(task_ids='request_info')"
    if status == "success":
        return "update_info"
    else:
        return "skipped_task"
    return "skipped_task"

request_info.set_downstream(skip_task)
# need to set downstream based on status
I expect the flow to be: after getting the info, identify the status; if success, proceed to update, else proceed to skipped.
Generally, tasks are supposed to be atomic, meaning they operate independently of one another (aside from their order of execution). You can express more complex relations and dependencies by using XCom and Airflow trigger rules.
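As a sketch of how the branch callable could work: pull the actual response from XCom (note that the response is pushed by the task with task_id='get_info', while the code in the question pulls task_ids='request_info', which is the variable name, not the task id), parse its JSON body, and return the task_id to follow. The choose_next helper name is mine:

```python
import json

def choose_next(response_text):
    """Map the JSON body returned by the 'get_info' task to the next task_id."""
    status = json.loads(response_text).get("status")
    # These task_ids must match the operators defined above:
    # 'update_info' and 'skipped' (not the variable name 'skipped_task').
    return "update_info" if status == "success" else "skipped"

def next_task(**context):
    # Pull the XCom value pushed by the task whose task_id is 'get_info';
    # the original code compared a string literal instead of the real value.
    response = context["ti"].xcom_pull(task_ids="get_info")
    return choose_next(response)
```

Note that in older Airflow versions the BranchPythonOperator needs provide_context=True for the callable to receive ti in its keyword arguments, and skip_task must be set upstream of both update_info and the skipped task for the branch to take effect.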

NightwatchJS: Custom Command not failing on error

Here is my custom command:
exports.command = function (element, time, debug) {
    let waitTime = time || 10000
    if (debug) {
        return this
            .log('waiting ' + waitTime + 'ms for: ' + element)
            .waitForElementVisible(element, waitTime)
    }
    return this
        .waitForElementVisible(element, waitTime)
}
I have also set this variable in the globalModules: abortOnFailure: true.
But when I call this in a pageObject like this:
findElement() {
    this.waitFor('#driversLicenseNumbers');
    return this
}
The object isn't found (which is expected and intended since I'm upgrading to Nightwatch v1.0.14) and the error message is logged to the console, but the test doesn't fail.
× Timed out while waiting for element <#driversLicenseNumbers> to be
present for 10000 milliseconds. - expected "visible" but got: "not
found"
Does anyone know what I'm doing wrong here?
There is already an open issue on the Nightwatch issues board regarding this specific problem. Here it is!
According to the bug report, this behavior affects custom_commands in nightwatch#1.0.15 and nightwatch#0.9.21 (though I am running nightwatch#0.9.21 and cannot reproduce it).
Basically your test fails, but it does so silently, at the end of the test, where you get the timeout error.
Proposed fix: install a different version (npm install --save-dev nightwatch#0.9.x), or any suitable version that hasn't introduced the defect.
Cheers!

PR preview analysis causes waitForQualityGate to fail

Today I encountered an anomaly when setting up a Jenkins pipeline script which both checks the quality gate
and annotates pull requests with any new issues.
Our setup consists of:
SonarQube 6.2
BitBucket Stash
Jenkins 2 (with 2 slaves)
AmadeusITGroup stash plugin
Part of the pipeline script:
node(node_label) {
    stage("SonarQube analysis") {
        withSonarQubeEnv('SonarQube') {
            def sonarQubeCommand = "org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar " +
                    "-Dsonar.host.url=https://sonar-url " +
                    "-Dsonar.login=sonarqube " +
                    "-Dsonar.password=token " +
                    "-Dsonar.language=java " +
                    "-Dsonar.sources=. " +
                    "-Dsonar.inclusions=**/src/main/java/**/*"
            if (pr.id != '') {
                sonarQubeCommand = sonarQubeCommand +
                        " -Dsonar.analysis.mode=preview" +
                        " -Dsonar.stash.notification=true " +
                        " -Dsonar.stash.project=" + pr.project_key +
                        " -Dsonar.stash.repository=" + pr.repository_slug +
                        " -Dsonar.stash.pullrequest.id=" + pr.id +
                        " -Dsonar.stash.password=token"
            }
            pipeline.mvn(sonarQubeCommand)
        }
    }
}
stage("Check Quality Gate") {
    timeout(time: 1, unit: 'HOURS') {
        def qg = waitForQualityGate()
        waitUntil {
            // Sometimes an analysis will get the status PENDING, meaning it still needs to be analysed.
            if (qg.status == 'PENDING') {
                qg = waitForQualityGate()
                return false
            } else {
                return true
            }
        }
        node(node_label) {
            if (qg.status != 'OK') {
                bitbucket.comment(pr, "_${env.JOB_NAME}#${env.BUILD_NUMBER}:_ **[✖ BUILD FAILURE](${build_url})**")
                bitbucket.approve(pr, false)
                pipeline.cleanWorkspace()
                error "Pipeline aborted due to quality gate failure: ${qg.status}"
            } else {
                bitbucket.comment(pr, "_${env.JOB_NAME}#${env.BUILD_NUMBER}:_ **[✔ BUILD SUCCESS](${build_url})**")
                bitbucket.approve(pr, true)
            }
        }
    }
}
Btw: without the waitUntil, the pipeline failed because the task status was PENDING in SonarQube.
So the example in the SonarSource blog didn't quite work for me.
Now for the details of how this pipeline fails:
when using sonar.analysis.mode=preview as a parameter on the Maven command,
the Jenkins job log will not contain the SonarQube analysis task id.
This results in a failure in the pipeline script on the waitForQualityGate command.
The message reads:
Unable to get SonarQube task id and/or server name. Please use the 'withSonarQubeEnv' wrapper to run your analysis.
As soon as I remove the sonar.analysis.mode=preview parameter, the Jenkins log contains a line like: [INFO] More about the report processing at https://sonar-url/api/ce/task?id=AVyHXjcsesZZZhqzzCSf
This line makes the waitForQualityGate command succeed normally.
However, this has an unwanted side effect besides polluting the project in SonarQube with PR results.
The side effect is that when an issue is added in the pull request, it is not reported on the pull request in Stash.
It always reports zero new issues, and this is clearly wrong.
As it's not a preview analysis anymore, I can see the new issue on the SonarQube server.
So somehow I have to choose between having pull requests annotated with new issues and
checking the quality gate.
Obviously I would like to do both.
For now I have chosen to let pull request annotation work properly and skip the quality gate check.
The question remains: am I doing something wrong here, or do I have to wait for new versions of the scanner and/or the Stash plugin for this to be resolved?
On the SonarQube server, go to:
Administration > Configuration > General Settings > Webhooks:
Name: Jenkins (or something like that)
URL: http://127.0.0.1:8080/sonarqube-webhook/
where the URL points at the Jenkins host.
Cause:
By default, webhooks on SonarQube are https, and my Jenkins was running over http.
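For context, the payload SonarQube POSTs to that webhook URL includes the computed quality gate status, which is what waitForQualityGate() ultimately reports back to the pipeline. A minimal Python sketch of reading such a payload (the JSON here is an abridged, hypothetical example; the exact fields vary by SonarQube version):

```python
import json

# Abridged, hypothetical example of the JSON body SonarQube would POST to
# http://<jenkins-host>/sonarqube-webhook/ once the background task finishes.
payload_text = '{"status": "SUCCESS", "qualityGate": {"name": "Default", "status": "ERROR"}}'

payload = json.loads(payload_text)
gate_status = payload["qualityGate"]["status"]  # roughly what qg.status contains
gate_ok = gate_status == "OK"
```

If the webhook never reaches Jenkins (wrong scheme, wrong URL), waitForQualityGate() has nothing to consume and the pipeline hangs or times out instead of failing the gate.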

Simulink display local error

Is it possible to display the local error of an integrator step of a variable-step solver in Simulink? I would like to find out why Simulink is taking small integrator steps. Since the step size depends on the local error of an integration step, it would be helpful to record the local error. Is this possible?
Have you tried using the Simulink Debugger? If you set breakpoints on the input and output of the block of interest, and run the simulation up to a time when the steps start to become small, you may be able to determine what's happening.
You might also play around with zero crossing detection. A pretty good discussion of its basics can be found here.
This is possible from the debugger (as Phil Goddard hinted). Start the model with the debugger (from the MATLAB console):
>> sldebug mdl
Enable "Solver trace level 1"
>> strace 1
Enable "Break on failed integration step"
>> xbreak
Start simulation:
>> continue
The simulation will then break when the local error is too large. For example:
[TM = 0.035250948751817182 ] Start of Major Time Step
[Tm = 0.035250948751817182 ] [Hm = 0.0009443691112440444 ] Start of Solver Phase
[Tm = 0.03525094875181718 ] [Hm = 0.0009443691112440444 ] Begin Integration Step
[Tn = 0.03525094875181718 ] [Hn = 0.0009443691112440444 ] Begin Newton Iteration
[Tf = 0.03619531786306122 ] [Hf = 0.0009443691112440444 ] Fail [Er = 6.8210e+00 ] [Ix = 1]
Detected integation step failure. Interrupting model execution
The Er value is the local error, Ix is the state index. To find the corresponding block, type:
>> states
With output
Continuous States:
Idx Value (system:block:element Name 'BlockName')
0 -7.96155746500428e-06 (0:0:0 CSTATE 'mdl/x')
1 1.630758262432841e-12 (0:1:0 CSTATE 'mdl/y')

Marking upstream Jenkins/Hudson as failed if downstream job fails

I am using Parameterized Trigger Plugin to trigger a downstream build.
How do I specify that my upstream job should fail if the downstream job fails? The upstream job is actually a dummy job that just passes parameters to the downstream job.
Make sure you are using the correct step to execute your downstream jobs; I discovered that since I was executing mine as a post-build step, I didn't have the "Block until the triggered projects finish their builds" option. Changing it to a build step, as opposed to a post-build step, allowed me to find the options you are looking for within the Parameterized Trigger Plugin.
This code will mark the upstream build unstable/failed based on the downstream job's status.
/*************************************************
Description: This script needs to be put in the
Groovy Postbuild plugin of Jenkins as a Post Build task.
*************************************************/
import hudson.model.*

void log(msg) {
    manager.listener.logger.println(msg)
}

def failRecursivelyUsingCauses(cause) {
    if (cause.class.toString().contains("UpstreamCause")) {
        def projectName = cause.upstreamProject
        def number = cause.upstreamBuild
        upstreamJob = hudson.model.Hudson.instance.getItem(projectName)
        if (upstreamJob) {
            upbuild = upstreamJob.getBuildByNumber(number)
            if (upbuild) {
                log("Setting to '" + manager.build.result + "' for Project: " + projectName + " | Build # " + number)
                //upbuild.setResult(hudson.model.Result.UNSTABLE)
                upbuild.setResult(manager.build.result);
                upbuild.save()
                // fail other builds
                for (upCause in cause.upstreamCauses) {
                    failRecursivelyUsingCauses(upCause)
                }
            }
        } else {
            log("No Upstream job found for " + projectName);
        }
    }
}

if (manager.build.result.isWorseOrEqualTo(hudson.model.Result.UNSTABLE)) {
    log("****************************************");
    log("Must mark upstream builds fail/unstable");
    def thr = Thread.currentThread()
    def build = thr.executable
    def c = build.getAction(CauseAction.class).getCauses()
    log("Current Build Status: " + manager.build.result);
    for (cause in c) {
        failRecursivelyUsingCauses(cause)
    }
    log("****************************************");
} else {
    log("Current build status is: Success - Not changing Upstream build status");
}
Have a look at the following response: Fail hudson build with groovy script. You can get access to the upstream job and fail its build BUT... be careful with the fact that Hudson/Jenkins post-build actions right now do not allow to specify any ordering: if your groovy script is specified besides other post-build actions, and those actions affect the result of the build (i.e.: parsing of test results), then you won't be able to update the status of the upstream job if Jenkins decides to run them after your groovy script.
Under the build steps, configure "Trigger/Call builds on other projects" and choose the downstream job. Select "Block until the triggered projects finish their builds" and keep the default settings under it. This setting will make the upstream job fail if the downstream job fails.
