Marking upstream Jenkins/Hudson as failed if downstream job fails - continuous-integration

I am using Parameterized Trigger Plugin to trigger a downstream build.
How do I specify that my upstream job should fail if the downstream job fails? The upstream job is actually a dummy job that just passes parameters to the downstream job.

Make sure you are using the correct step to execute your downstream jobs; I discovered that because I was executing mine as a post-build step, I didn't have the "Block until the triggered projects finish their builds" option. Changing it to a build step rather than a post-build step let me find the options you are looking for within the Parameterized Trigger Plugin.

This code will mark the upstream build unstable or failed based on the downstream job's status.
/*************************************************
 Description: This script needs to be placed in the
 Groovy Postbuild plugin of Jenkins as a post-build
 step.
*************************************************/
import hudson.model.*

void log(msg) {
    manager.listener.logger.println(msg)
}

def failRecursivelyUsingCauses(cause) {
    if (cause.class.toString().contains("UpstreamCause")) {
        def projectName = cause.upstreamProject
        def number = cause.upstreamBuild
        def upstreamJob = hudson.model.Hudson.instance.getItem(projectName)
        if (upstreamJob) {
            def upbuild = upstreamJob.getBuildByNumber(number)
            if (upbuild) {
                log("Setting to '" + manager.build.result + "' for Project: " + projectName + " | Build # " + number)
                //upbuild.setResult(hudson.model.Result.UNSTABLE)
                upbuild.setResult(manager.build.result)
                upbuild.save()
                // fail other builds further upstream
                for (upCause in cause.upstreamCauses) {
                    failRecursivelyUsingCauses(upCause)
                }
            }
        } else {
            log("No upstream job found for " + projectName)
        }
    }
}

if (manager.build.result.isWorseOrEqualTo(hudson.model.Result.UNSTABLE)) {
    log("****************************************")
    log("Must mark upstream builds fail/unstable")
    def thr = Thread.currentThread()
    def build = thr.executable
    def causes = build.getAction(CauseAction.class).getCauses()
    log("Current Build Status: " + manager.build.result)
    for (cause in causes) {
        failRecursivelyUsingCauses(cause)
    }
    log("****************************************")
} else {
    log("Current build status is: Success - Not changing upstream build status")
}

Have a look at the following response: Fail hudson build with groovy script. You can get access to the upstream job and fail its build, BUT be careful: Hudson/Jenkins post-build actions currently do not allow you to specify any ordering. If your Groovy script is specified alongside other post-build actions that affect the result of the build (e.g. parsing of test results), you won't be able to update the status of the upstream job if Jenkins decides to run those actions after your Groovy script.

Under Build step, configure "Trigger/Call builds on other projects" and choose the downstream job. Select "Block until the triggered projects finish their builds" and save the default settings under it. This setting will mark the upstream job as failed if the downstream job fails.
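If the jobs are Pipeline rather than freestyle, the same behaviour comes from the build step, which blocks and propagates the downstream result by default. This is only a sketch; 'downstream-job' is a hypothetical job name, not one from the question:

```groovy
// Scripted Pipeline sketch; 'downstream-job' is a placeholder name.
// wait: true (the default) blocks until the downstream build finishes;
// propagate: true (also the default) fails this build if the downstream fails.
build job: 'downstream-job', wait: true, propagate: true
```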


Gradle sync in IntelliJ triggers tasks defined in build.gradle - why, and how to prevent that? [duplicate]

So I am sure this is a very dumb mistake, but I need your help since I am not a Gradle expert.
TASK:
Read the versionCode from a file, add 1 to it, and save it back.
task executeOrderSixtySix {
    def versionPropsFile = file('versionCodes.properties')
    if (versionPropsFile.canRead()) {
        def Properties versionProps = new Properties()
        versionProps.load(new FileInputStream(versionPropsFile))
        def versionNumber = versionProps['DEV_VERSION'].toInteger() + 1
        versionProps['DEV_VERSION'] = versionNumber.toString()
        versionProps.store(versionPropsFile.newWriter(), null)
        // 'assembleDebug'
    } else {
        throw new GradleException("Nyeeeh on versionCodes.properties!")
    }
}
So when I have to do an internal drop I would like to run this task first, increase the devVersion number by 1 and then run the 'assemble' task to build all artifacts.
PROBLEM:
This task executes itself even when I just sync the Gradle file, so the versionCode increases all the time.
I don't want to increase the versionCode during sync or regular development builds, only for QA drops, when I also have to assemble every APK.
Could you please help me out and tell me why this task is getting called/executed and how I can prevent it?
You need a doLast block inside your task block. A build.gradle file is a configuration script, so the task body reads as: declare the task at configuration time, and declare its action for execution time.
Anything in the task either before or after the doLast block runs during the configuration phase. The code inside the doLast block runs at execution time.
task executeOrderSixtySix {
    doLast {
        def versionPropsFile = file('versionCodes.properties')
        if (versionPropsFile.canRead()) {
            def Properties versionProps = new Properties()
            versionProps.load(new FileInputStream(versionPropsFile))
            def versionNumber = versionProps['DEV_VERSION'].toInteger() + 1
            versionProps['DEV_VERSION'] = versionNumber.toString()
            versionProps.store(versionPropsFile.newWriter(), null)
            // 'assembleDebug'
        } else {
            throw new GradleException("Nyeeeh on versionCodes.properties!")
        }
    }
}
Ref: https://www.oreilly.com/learning/write-your-own-custom-tasks-in-gradle
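To run the bump automatically when assembling for a QA drop, you can also wire the dependency explicitly. This is a sketch, not part of the original answer: since the Android plugin creates the assemble* tasks late, hooking whenTaskAdded is one safe way to attach it; 'assembleDebug' is assumed from the commented hint in the question.

```groovy
// Sketch: attach the version bump to the QA assemble task only.
// Syncing still does nothing, because the bump work lives inside doLast.
tasks.whenTaskAdded { added ->
    if (added.name == 'assembleDebug') {
        added.dependsOn executeOrderSixtySix
    }
}
```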

Quality Gate does not fail when conditions for success are not met

I have set up a Quality Gate for my Jenkins project via SonarQube. One of my projects has no tests at all, so the analysis shows 0% code coverage. By the quality gate rules (<60% coverage = fail), my pipeline should return an error. However, this does not happen: the quality gate says the analysis was a success and its status is 'OK'. In another project, I removed some tests to push coverage below 60%, and the quality gate passed once again, even though it was meant to fail.
I previously had an error where the analysis always returned 0% coverage, but managed to fix it (with help from this link). I found a lot of articles with similar questions but no answers on any of them. This post looks promising, but I cannot find a suitable alternative to its suggestion.
It is worth mentioning that the analysis stage is done in parallel with another stage (to save some time). The Quality Gate stage comes shortly afterwards.
The relevant code I use to initialise the analysis for my project is (the org.jacoco... bit is the solution to the 0% coverage error I mentioned above):
sh "mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent verify sonar:sonar -Dsonar.host.url=${env.SONAR_HOST_URL} -Dsonar.login=${env.SONAR_AUTH_TOKEN} -Dsonar.projectKey=${projectName} -Dsonar.projectName=${projectName} -Dsonar.sources=. -Dsonar.java.binaries=**/* -Dsonar.language=java -Dsonar.exclusions=$PROJECT_DIR/src/test/java/** -f ./$PROJECT_DIR/pom.xml"
The full quality gate code (to clarify how my quality gate starts and finishes):
stage("Quality Gate") {
    steps {
        timeout(time: 15, unit: 'MINUTES') { // If analysis takes longer than indicated time, then build will be aborted
            withSonarQubeEnv('ResearchTech SonarQube') {
                script {
                    // Workaround code, since we cannot have a global webhook
                    def reportFilePath = "target/sonar/report-task.txt"
                    def reportTaskFileExists = fileExists "${reportFilePath}"
                    if (reportTaskFileExists) {
                        def taskProps = readProperties file: "${reportFilePath}"
                        def authString = "${env.SONAR_AUTH_TOKEN}"
                        def taskStatusResult =
                            sh(script: "curl -s -X GET -u ${authString} '${taskProps['ceTaskUrl']}'", returnStdout: true)
                        //echo "taskStatusResult[${taskStatusResult}]"
                        def taskStatus = new groovy.json.JsonSlurper().parseText(taskStatusResult).task.status
                        echo "taskStatus[${taskStatus}]"
                        if (taskStatus == "SUCCESS") {
                            echo "Background tasks are completed"
                        } else {
                            while (true) {
                                sleep 10
                                taskStatusResult =
                                    sh(script: "curl -s -X GET -u ${authString} '${taskProps['ceTaskUrl']}'", returnStdout: true)
                                //echo "taskStatusResult[${taskStatusResult}]"
                                taskStatus = new groovy.json.JsonSlurper().parseText(taskStatusResult).task.status
                                echo "taskStatus[${taskStatus}]"
                                if (taskStatus != "IN_PROGRESS" && taskStatus != "PENDING") {
                                    break
                                }
                            }
                        }
                    } else {
                        error "Haven't found report-task.txt."
                    }
                    def qg = waitForQualityGate() // Waiting for analysis to be completed
                    if (qg.status != 'OK') { // If quality gate was not met, then present error
                        error "Pipeline aborted due to quality gate failure: ${qg.status}"
                    }
                }
            }
        }
    }
}
What is shown in the SonarQube UI for the project? Does it show that the quality gate failed, or not?
I don't quite understand what you're doing in that pipeline script. It sure looks like you're calling "waitForQualityGate()" twice, but only checking for error on the second call. I use scripted pipeline, so I know it would look slightly different.
Update:
Based on your additional comment, if the SonarQube UI says that it passed the quality gate, then that means there's nothing wrong with your pipeline code (at least with respect to the quality gate). The problem will be in the definition of your quality gate.
However, I would also point out one other error in how you're checking for the background task results.
The possible values of "taskStatus" are "SUCCESS", "ERROR", "PENDING", and "IN_PROGRESS". If you need to determine whether the task is still running, you have to check for either of the last two values. If you need to determine whether the task is complete, you need to check for either of the first two values. You're checking for completion, but you're only checking for "SUCCESS". That means if the task failed, which it would if the quality gate failed (which isn't happening here), you would continue to wait for it until you timed out.
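Applied to the pipeline in the question, a sketch of the corrected polling (same variables as above; only the terminal-state handling changes, and the exact failure message is my own wording):

```groovy
// Poll until the CE task leaves PENDING/IN_PROGRESS, then treat ERROR
// as a hard failure instead of waiting for the stage timeout.
while (taskStatus == "PENDING" || taskStatus == "IN_PROGRESS") {
    sleep 10
    taskStatusResult = sh(script: "curl -s -X GET -u ${authString} '${taskProps['ceTaskUrl']}'", returnStdout: true)
    taskStatus = new groovy.json.JsonSlurper().parseText(taskStatusResult).task.status
}
if (taskStatus == "ERROR") {
    error "SonarQube background task failed"
}
```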

How to run conditional task in Airflow with previous http operator requested value

I am creating a DAG file with multiple SimpleHttpOperator requests. I need to skip the next task if the previous task returned a failed status, and only continue on a success status.
I tried BranchPythonOperator, inside which I decide which task to run next, but it does not seem to work.
A sample of what request_info will return:
{
    "data": {
        "name": "Allan",
        "age": "26",
        "gender": "male",
        "country": "California"
    },
    "status": "failed"
}
request_info = SimpleHttpOperator(
    task_id='get_info',
    endpoint='get/information',
    http_conn_id='localhost',
    data=({"guest": "1"}),
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    dag=dag
)
update_info = SimpleHttpOperator(
    task_id='update_info',
    endpoint='update/information',
    http_conn_id='localhost',
    data=("{{ti.xcom_pull(task_ids='request_info')}}"),
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    dag=dag
)
skipped_task = DummyOperator(
    task_id='skipped',
    dag=dag
)
def next_task(**kwargs):
    status = "ti.xcom_pull(task_ids='request_info')"
    if status == "success":
        return "update_info"
    else:
        return "skipped_task"
skip_task = BranchPythonOperator(
    task_id='skip_task',
    python_callable=next_task,
    dag=dag
)
request_info.set_downstream(skip_task)
# need to set downstream based on status
I expect the flow to be: after getting the info, identify the status; if success, proceed to update, otherwise proceed to skipped.
Generally, tasks are supposed to be atomic, which means they operate independently of one another (besides their order of execution). You can express more complex relations and dependencies by using XCom and Airflow trigger rules.
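For the concrete DAG in the question, the usual fix (a sketch, using the task names from the question) is to make the branch callable actually call xcom_pull and parse the JSON status, rather than comparing a quoted string literal against "success":

```python
import json

def choose_next_task(response_text):
    """Map the upstream JSON response to the task_id to branch to."""
    status = json.loads(response_text).get("status")
    return "update_info" if status == "success" else "skipped"

def next_task(**context):
    # task_ids must match the upstream task_id ('get_info' in the question),
    # and xcom_pull must be called, not written inside a string.
    response = context["ti"].xcom_pull(task_ids="get_info")
    return choose_next_task(response)
```

With provide_context=True on the BranchPythonOperator (needed in Airflow 1.x so **context is populated) and the wiring request_info >> skip_task >> [update_info, skipped_task], the branch not returned by next_task is skipped.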

PR preview analysis causes waitForQualityGate to fail

Today I encountered an anomaly when setting up a Jenkins pipeline script that both checks the quality gate and annotates pull requests with any new issues.
Our setup consists of:
SonarQube 6.2
BitBucket Stash
Jenkins 2 (with 2 slaves)
AmadeusITGroup stash plugin
Part of the pipeline script:
node(node_label) {
    stage("SonarQube analysis") {
        withSonarQubeEnv('SonarQube') {
            def sonarQubeCommand = "org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar " +
                "-Dsonar.host.url=https://sonar-url " +
                "-Dsonar.login=sonarqube " +
                "-Dsonar.password=token " +
                "-Dsonar.language=java " +
                "-Dsonar.sources=. " +
                "-Dsonar.inclusions=**/src/main/java/**/*"
            if (pr.id != '') {
                sonarQubeCommand = sonarQubeCommand +
                    " -Dsonar.analysis.mode=preview" +
                    " -Dsonar.stash.notification=true " +
                    " -Dsonar.stash.project=" + pr.project_key +
                    " -Dsonar.stash.repository=" + pr.repository_slug +
                    " -Dsonar.stash.pullrequest.id=" + pr.id +
                    " -Dsonar.stash.password=token"
            }
            pipeline.mvn(sonarQubeCommand)
        }
    }
}
stage("Check Quality Gate") {
    timeout(time: 1, unit: 'HOURS') {
        def qg = waitForQualityGate()
        waitUntil {
            // Sometimes an analysis will get the status PENDING, meaning it still needs to be analysed.
            if (qg.status == 'PENDING') {
                qg = waitForQualityGate()
                return false
            } else {
                return true
            }
        }
        node(node_label) {
            if (qg.status != 'OK') {
                bitbucket.comment(pr, "_${env.JOB_NAME}#${env.BUILD_NUMBER}:_ **[✖ BUILD FAILURE](${build_url})**")
                bitbucket.approve(pr, false)
                pipeline.cleanWorkspace()
                error "Pipeline aborted due to quality gate failure: ${qg.status}"
            } else {
                bitbucket.comment(pr, "_${env.JOB_NAME}#${env.BUILD_NUMBER}:_ **[✔ BUILD SUCCESS](${build_url})**")
                bitbucket.approve(pr, true)
            }
        }
    }
}
Btw: without the waitUntil, the pipeline failed because the task status was PENDING in SonarQube, so the example in the SonarSource blog didn't quite work for me.
Now for the details of how this pipeline fails:
When using sonar.analysis.mode=preview as a parameter on the Maven command, the Jenkins job log will not contain the SonarQube analysis task id.
This results in a failure of the pipeline script at the waitForQualityGate command.
The message reads:
Unable to get SonarQube task id and/or server name. Please use the 'withSonarQubeEnv' wrapper to run your analysis.
As soon as I remove the sonar.analysis.mode=preview parameter, the Jenkins log contains a line like: [INFO] More about the report processing at https://sonar-url/api/ce/task?id=AVyHXjcsesZZZhqzzCSf
This line makes the waitForQualityGate command succeed normally.
However, this has an unwanted side effect besides polluting the project in SonarQube with PR results: when an issue is added in the pull request, it won't be reported on the pull request in Stash.
It always reports zero new issues, which is clearly wrong; since it is no longer a preview analysis, I can see the new issue on the SonarQube server.
So somehow I have to choose between having pull requests annotated with new issues and checking the quality gate.
Obviously I would like to do both. For now, I have chosen to let pull request annotation work properly and skip the quality gate check.
The question remains: am I doing something wrong here, or do I have to wait for new versions of the scanner and/or the Stash plugin to have this resolved?
On the SonarQube server, go to:
Administration > Configuration > General Settings > Webhooks:
Name: Jenkins, or something like that
URL: http://127.0.0.1:8080/sonarqube-webhook/
where the URL points to your Jenkins host.
Cause:
By default, webhooks on SonarQube use https, and my Jenkins was working over http.

Jenkins delete builds older than latest 20 builds for all jobs

I am in the process of cleaning up Jenkins (it was set up incorrectly) and I need to delete builds that are older than the latest 20 builds for every job.
Is there any way to automate this using a script or something?
I found many solutions to delete certain builds for specific jobs, but I can't seem to find anything for all jobs at once.
Any help is much appreciated.
You can use the Jenkins Script Console to iterate through all jobs, get a list of the N most recent and perform some action on the others.
import jenkins.model.Jenkins
import hudson.model.Job

MAX_BUILDS = 20

for (job in Jenkins.instance.items) {
    println job.name
    def recent = job.builds.limit(MAX_BUILDS)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println "Preparing to delete: " + build
            // build.delete()
        }
    }
}
The Jenkins Script Console is a great tool for administrative maintenance like this and there's often an existing script that does something similar to what you want.
I got the error No such property: builds for class: com.cloudbees.hudson.plugins.folder.Folder on Folders Plugin 6.6 while running Dave Bacher's script.
Alter it to use the functional API:
import jenkins.model.Jenkins
import hudson.model.Job

MAX_BUILDS = 5

Jenkins.instance.getAllItems(Job.class).each { job ->
    println job.name
    def recent = job.builds.limit(MAX_BUILDS)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println "Preparing to delete: " + build
            build.delete()
        }
    }
}
There are lots of ways to do this.
Personally, I would use "Discard old builds" in the job config.
If you have lots of jobs, you could use the CLI to step through all the jobs and add it.
Alternatively, there is the configuration slicing plugin, which will also do this for you on a large scale.
For Multibranch Pipelines, I modified Dave Bacher's script a bit. Use this to delete builds older than the latest 20 builds of the "master" branches:
import jenkins.model.Jenkins

MAX_BUILDS = 20

for (job in Jenkins.instance.items) {
    if (job instanceof jenkins.branch.MultiBranchProject) {
        job = job.getJob("master")
        def recent = job.builds.limit(MAX_BUILDS)
        for (build in job.builds) {
            if (!recent.contains(build)) {
                println "Preparing to delete: " + build
                // build.delete()
            }
        }
    }
}
This can be done in many ways. You can try the following:
Get all your job names into a text file by going to the jobs directory on the Jenkins server and running:
ls > jobs.txt
Now you can write a shell script with a for loop:
#!/bin/bash
# read jobs.txt
for i in $(cat <pathtojobs.txt>)
do
    curl -X POST http://jenkins-host.tld:8080/jenkins/job/$i/[1-9]*/doDeleteAll
done
The above deletes the matched builds for every job.
you can also refer here for more answers
I had issues running the suggestions on my Jenkins instance, possibly because it is dockerized. In any case, removing the folder beforehand using the underlying bash interpreter fixed the issue. I also modified the script to keep 180 days of build logs and a minimum of 7 build logs:
import jenkins.model.Jenkins
import hudson.model.Job

MIN_BUILD_LOGS = 7
def sixMonthsAgo = new Date() - 180

Jenkins.instance.getAllItems(Job.class).each { job ->
    println job.getFullDisplayName()
    def recent = job.builds.limit(MIN_BUILD_LOGS)
    def buildsToDelete = job.builds.findAll {
        !recent.contains(it) && !(it.getTime() > sixMonthsAgo)
    }
    if (!buildsToDelete) {
        println "nothing to do"
    }
    for (build in buildsToDelete) {
        println "Preparing to delete: " + build + build.getTime()
        ["bash", "-c", "rm -r " + build.getRootDir()].execute()
        build.delete()
    }
}
"done"
