I have a parametrized job Dummy that works as expected. Then I have multiple jobs that call job Dummy with specific sets of parameters to run (job Test, for example).
Let's say job Dummy script is as follows:
def jobLabel = "dummy-" + env.JOB_BASE_NAME.replace('/', '-').replace(' ', '_') + "${PARAM}"
currentBuild.displayName = "Dummy ${PARAM}"
// previousBuild is null on the very first run, and result is a Result
// object, hence the null-safe toString() before comparing
def previousResult = currentBuild.previousBuild?.result?.toString()
echo "Previous result: ${previousResult}"
if (previousResult == "SUCCESS") {
    error("Build failed because of this and that..")
} else {
    echo "Dummy ${PARAM}!"
}
And job Test script as follows:
// in this map we'll place the jobs that we wish to run
def branches = [:]
def environments = [
    'US',
    'EU',
    'AU'
]
environments.each { region -> // renamed from `env` to avoid shadowing the global
    branches["Dummy Tests " + region] = {
        // propagate: false, otherwise a downstream FAILURE aborts this build
        // before the result can be inspected
        def result = build job: 'Dummy', propagate: false, parameters: [
            string(name: 'PARAM', value: region)
        ]
        echo "${result.getResult()}"
        if (result.getResult() == "SUCCESS") {
            echo "Success! " + region
        } else if (result.getResult() == "FAILURE") {
            echo "Failure! " + region
        }
    }
}
parallel branches
currentBuild.previousBuild returns whatever ran last, regardless of parameters. I would like to somehow have the build history keyed by the parameter, so I can detect when a specific parameter combination switches from failure to success, and vice versa, for notification purposes. I guess you could iterate the history for that, but it sounds too complicated for something that is hopefully a common requirement. Any hints or ideas?
I have been reading as many posts as possible about this topic, but none of them suggest a working solution for me, so I'm throwing it to the community again:
In a Jenkinsfile pipeline I have
steps {
(...)
sh script: '''
$pkgname #existing var
export report_filename=$pkgname'_report.txt'
(stuff is being written to the $report_filename file...)
'''
}
post {
always {
script {
//want to read the file with name carried by $report_filename
def report = readFile(file: env.report_filename, encoding: 'utf-8').trim()
buildDescription(report)
}
}
}
I can't manage to pass the value of the report_filename bash variable on to the post > always > script section. I've tried ${env.report_filename} (with and without single/double quotes), with and without env., and some other crazy things.
What am I doing wrong here?
Thanks.
This may be a little bit off, but:
Create a variable: def var
Use the option returnStdout: true and parse the output: var = sh(script: 'echo $pkgname', returnStdout: true).split("\n")
Use var[0] in the stage: readFile(file: var[0], ...)
If you can use env, add:
environment {
    VAR = sh(script: 'echo $pkgname', returnStdout: true).split("\n")[0]
}
script {
    // read the file whose name is carried by $report_filename
    def report = readFile(file: env.VAR, encoding: 'utf-8').trim()
    buildDescription(report)
}
I don't see why you don't simply declare the variables in Groovy right at the start.
I'm not too familiar with the language, and don't currently have a way to test this; but something like this:
def pkgname = "gunk"
def report_filename = "${pkgname}_report.txt"
steps {
(...)
sh script: """
# use triple double quotes so that Groovy variables are interpolated
# $pkgname #syntax error, take it out
(stuff is being written to the $report_filename file...)
"""
}
post {
always {
script {
// read the file whose name is carried by report_filename (a Groovy
// variable here, so no env. prefix)
def report = readFile(file: report_filename, encoding: 'utf-8').trim()
buildDescription(report)
}
}
}
I am creating a DAG file with multiple SimpleHttpOperator requests. I need to skip the next task if the previous task returned a failed status, and only continue on a success status.
I tried a BranchPythonOperator, inside which I decide which task to run next, but it doesn't seem to work.
A sample response from request_info:
{
"data":{
"name":"Allan",
"age":"26",
"gender":"male",
"country":"California"
},
"status":"failed"
}
request_info = SimpleHttpOperator(
    task_id='get_info',
    endpoint='get/information',
    http_conn_id='localhost',
    data=({"guest": "1"}),
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    dag=dag
)
update_info = SimpleHttpOperator(
    task_id='update_info',
    endpoint='update/information',
    http_conn_id='localhost',
    data=("{{ ti.xcom_pull(task_ids='get_info') }}"),
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    dag=dag
)
skipped_task = DummyOperator(
task_id='skipped',
dag=dag
)
skip_task = BranchPythonOperator(
task_id='skip_task',
python_callable=next_task,
dag=dag
)
def next_task(**kwargs):
    status = "ti.xcom_pull(task_ids='get_info')"
    if status == "success":
        return "update_info"
    else:
        return "skipped"
request_info.set_downstream(skip_task)
# need to set the downstream based on status
I expect the flow to be: after getting the info, check the status; if success, proceed to update, otherwise proceed to skipped.
Generally, tasks are supposed to be atomic, which means that they operate independently of one another (besides their order of execution). More complex relations and dependencies can be expressed using XCom and Airflow trigger rules.
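For instance, here is a sketch of how the branch callable from the question could pull the real response via XCom and parse it, instead of comparing a literal string (the task ids mirror the question's operators, and the payload shape is assumed from the sample above):

```python
import json

def choose_next_task(response):
    """Decide the downstream task id from the JSON response body."""
    status = json.loads(response).get("status")
    return "update_info" if status == "success" else "skipped"

def next_task(**kwargs):
    # Pull the raw response the upstream task pushed to XCom; the
    # task_ids value must match the upstream operator's task_id.
    response = kwargs["ti"].xcom_pull(task_ids="get_info")
    return choose_next_task(response)
```

The BranchPythonOperator then runs whichever task id the callable returns and skips the other branch automatically.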
I am in the process of cleaning up Jenkins (it was setup incorrectly) and I need to delete builds that are older than the latest 20 builds for every job.
Is there any way to automate this using a script or something?
I found many solutions to delete certain builds for specific jobs, but I can't seem to find anything for all jobs at once.
Any help is much appreciated.
You can use the Jenkins Script Console to iterate through all jobs, get a list of the N most recent and perform some action on the others.
import jenkins.model.Jenkins
import hudson.model.Job
MAX_BUILDS = 20
for (job in Jenkins.instance.items) {
    println job.name
    def recent = job.builds.limit(MAX_BUILDS)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println "Preparing to delete: " + build
            // build.delete()
        }
    }
}
The Jenkins Script Console is a great tool for administrative maintenance like this and there's often an existing script that does something similar to what you want.
I got the error "No such property: builds for class: com.cloudbees.hudson.plugins.folder.Folder" on Folders Plugin 6.6 while running Dave Bacher's script.
Alter it to use the functional API, which recurses into folders:
import jenkins.model.Jenkins
import hudson.model.Job
MAX_BUILDS = 5
Jenkins.instance.getAllItems(Job.class).each { job ->
    println job.name
    def recent = job.builds.limit(MAX_BUILDS)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println "Preparing to delete: " + build
            build.delete()
        }
    }
}
There are lots of ways to do this.
Personally, I would use the 'Discard old builds' option in the job config.
If you have lots of jobs, you could use the CLI to step through all the jobs and add it.
Alternatively, there is the Configuration Slicing plugin, which will also do this for you on a large scale.
For Multibranch Pipelines, I modified the script by Dave Bacher a bit. Use this to delete builds older than the latest 20 build of "master" branches:
MAX_BUILDS = 20
for (job in Jenkins.instance.items) {
    if (job instanceof jenkins.branch.MultiBranchProject) {
        job = job.getJob("master")
        if (job == null) continue // skip projects without a master branch
        def recent = job.builds.limit(MAX_BUILDS)
        for (build in job.builds) {
            if (!recent.contains(build)) {
                println "Preparing to delete: " + build
                // build.delete()
            }
        }
    }
}
This can be done in many ways. You can try the following.
Get all your job names into a text file by going to the jobs directory in Jenkins and running:
ls > jobs.txt
Now you can write a shell script with a for loop:
#!/bin/bash
# read the jobs.txt
for i in $(cat <pathtojobs.txt>)
do
    curl -X POST "http://jenkins-host.tld:8080/jenkins/job/$i/[1-9]*/doDeleteAll"
done
The above deletes all builds of every job in the list.
I had issues running the suggestions on my Jenkins instance. It could be because it is dockerized. In any case, removing the folder beforehand using the underlying bash interpreter fixes the issue. I also modified the script to keep 180 days of build logs and keep a minimum of 7 build logs:
import jenkins.model.Jenkins
import hudson.model.Job
MIN_BUILD_LOGS = 7
def sixMonthsAgo = new Date() - 180
Jenkins.instance.getAllItems(Job.class).each { job ->
    println job.getFullDisplayName()
    def recent = job.builds.limit(MIN_BUILD_LOGS)
    def buildsToDelete = job.builds.findAll {
        !recent.contains(it) && !(it.getTime() > sixMonthsAgo)
    }
    if (!buildsToDelete) {
        println "nothing to do"
    }
    for (build in buildsToDelete) {
        println "Preparing to delete: " + build + build.getTime()
        ["bash", "-c", "rm -r " + build.getRootDir()].execute()
        build.delete()
    }
}
"done"
I have M tasks to process and N parallel processing resources available (think worker threads on Heroku, or EC2 instances), where M >> N.
I could roll my own system, but it seems likely there's already a debugged package or gem for this: what do you recommend? (Now that I think about it, I could torture Delayed::Job into doing this.)
The tasks can be written just about any language -- even a shell script will do the job. The 'mother ship' is Ruby On Rails with a PostgreSQL database. The basic idea is that when a resource is ready to process a task, it asks the mother ship for the next un-processed task in the queue and starts processing it. If the job fails, it is re-tried a few times before giving up. The results can go into flat files or be written into the PostgreSQL database.
(And, no, this is not for generating spam. I'm researching degree distribution of several large social networks.)
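The pull-and-retry protocol described above can be sketched roughly as follows (in Python for brevity, since the tasks can be in any language; fetch_next_task, run_task, and report_result are hypothetical stand-ins for calls back to the Rails mother ship):

```python
import time

MAX_RETRIES = 3

def worker_loop(fetch_next_task, run_task, report_result):
    """Pull unprocessed tasks and run each with a few retries before giving up."""
    while (task := fetch_next_task()) is not None:
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                result = run_task(task)
                report_result(task, result)
                break
            except Exception:
                if attempt == MAX_RETRIES:
                    report_result(task, None)  # give up after the last retry
                else:
                    time.sleep(0.1 * attempt)  # simple backoff before retrying
```

Each worker runs this loop until the queue is drained, which is essentially what the libraries below implement for you, with persistence and monitoring on top.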
I think this is a job for delayed_job https://github.com/collectiveidea/delayed_job or resque https://github.com/defunkt/resque, as you said.
This would be rolling your own, but if your parallel tasks are not resource intensive, it is a reasonably quick solution. On the other hand, if they are resource intensive, you'll want to implement something much more robust.
You could start each thread with Process::fork (if the process is in ruby), or Process::exec, or Process::spawn (if the process is in something else). Then use Process::waitall for the sub-processes to complete.
Below, I used a Hash to hold the functions themselves as well as the PID's. This could definitely be improved on.
# define the sub-processes
sleep_2_fail = lambda { sleep 2; exit -1; }
sleep_2_pass = lambda { sleep 2; exit 0; }
sleep_1_pass = lambda { sleep 1; exit 0; }
sleep_3_fail = lambda { sleep 3; exit -1; }
# use a hash to store the lambda's and their PID's
sub_processes = Hash.new
# add the sub_processes to the hash
# key = PID
# value = lambda (can use to be re-called later on)
sub_processes.merge! ({ Process::fork { sleep_2_fail.call } => sleep_2_fail })
sub_processes.merge! ({ Process::fork { sleep_2_pass.call } => sleep_2_pass })
sub_processes.merge! ({ Process::fork { sleep_1_pass.call } => sleep_1_pass })
sub_processes.merge! ({ Process::fork { sleep_3_fail.call } => sleep_3_fail })
# starting time of the loop
start = Time.now
# use a while loop to wait at most 10 seconds or until
# the results are empty (no sub-processes)
while ((results = Process.waitall).count > 0 && Time.now - start < 10) do
  results.each do |pid, status|
    if status != 0
      # again add the { PID => lambda } to the hash
      sub_processes.merge!({ Process::fork { sub_processes[pid].call } => sub_processes[pid] })
    end
    # delete the original entry
    sub_processes.delete pid
  end
end
The ruby-doc on waitall is helpful.
It sounds like you want a job processor. Look at Gearman http://gearman.org/
Fairly language agnostic.
And here's the ruby Gem info http://gearmanhq.com/help/tutorials/ruby/getting_started/
I am using Parameterized Trigger Plugin to trigger a downstream build.
How do I specify that my upstream job should fail if the downstream fails? The upstream job is actually is dummy job with parameters being passed to the downstream.
Make sure you are using the correct step to execute your downstream jobs; I discovered that since I was executing mine as a post-build step, I didn't have the "Block until the triggered projects finish their builds" option. Changing it to a build step, as opposed to a post-build step, allowed me to find the options you are looking for within the Parameterized Trigger Plugin.
This code will mark the upstream build unstable/failed based on the downstream job status.
/*************************************************
Description: This script needs to put in Groovy
Postbuild plugin of Jenkins as a Post Build task.
*************************************************/
import hudson.model.*

void log(msg) {
    manager.listener.logger.println(msg)
}

def failRecursivelyUsingCauses(cause) {
    if (cause.class.toString().contains("UpstreamCause")) {
        def projectName = cause.upstreamProject
        def number = cause.upstreamBuild
        upstreamJob = hudson.model.Hudson.instance.getItem(projectName)
        if (upstreamJob) {
            upbuild = upstreamJob.getBuildByNumber(number)
            if (upbuild) {
                log("Setting to '" + manager.build.result + "' for Project: " + projectName + " | Build # " + number)
                //upbuild.setResult(hudson.model.Result.UNSTABLE)
                upbuild.setResult(manager.build.result)
                upbuild.save()
                // fail other builds
                for (upCause in cause.upstreamCauses) {
                    failRecursivelyUsingCauses(upCause)
                }
            }
        } else {
            log("No upstream job found for " + projectName)
        }
    }
}
if (manager.build.result.isWorseOrEqualTo(hudson.model.Result.UNSTABLE)) {
    log("****************************************")
    log("Must mark upstream builds fail/unstable")
    def thr = Thread.currentThread()
    def build = thr.executable
    def c = build.getAction(CauseAction.class).getCauses()
    log("Current Build Status: " + manager.build.result)
    for (cause in c) {
        failRecursivelyUsingCauses(cause)
    }
    log("****************************************")
} else {
    log("Current build status is: Success - Not changing upstream build status")
}
Have a look at the following response: Fail hudson build with groovy script. You can get access to the upstream job and fail its build, BUT be careful: Hudson/Jenkins post-build actions currently do not allow you to specify any ordering. If your Groovy script is specified alongside other post-build actions, and those actions affect the result of the build (e.g. parsing of test results), then you won't be able to update the status of the upstream job if Jenkins decides to run them after your Groovy script.
Under Build steps, configure "Trigger/call builds on other projects" and choose the downstream job. Select "Block until the triggered projects finish their builds" and save the default settings under it. This setting will make the upstream job fail if the downstream job fails.