Jenkins parameterized matrix job

I am in the process of setting up a Jenkins job to run a bunch of tests on some C++ code. The code is generated during one Jenkins job. There are a number of sub-projects, with their code in their own folders.
My thought is to have a matrix job where each configuration runs the test on one folder of code files. There are two things that I am not sure of the best way to do, though...
I would like to set up the matrix job to automatically pick up if more sub-folders are added. Something like passing a list of folders to the job as a parameter, and have that parameter used as the axis for the job.
I would like the test to not be run on a specific folder unless some of the code in that folder was changed by the parent job.
Right now, how to set up this test is completely open - I am trolling for ideas. If you have ever set up something like this, how did you do it?

I had a similar task - running a matrix job with a variable number of folders as one axis. The folders were in version control but could easily be an artifact. What I've done is create two jobs: one main, normal job and one slave matrix job. Here is the code that needs to be run as an elevated (system) Groovy script in the main job:
import hudson.model.*
def currentBuild = Thread.currentThread().executable;
def jobName = 'SlaveMatrixJob' // Name of the matrix job to configure
def axisFolders = []
def strings =""
// Get the matrix job
def job = hudson.model.Hudson.instance.getItem(jobName)
assert job != null, "The job $jobName could not be found"
// Check it is a matrix job
assert job.getClass() == hudson.matrix.MatrixProject.class, "The job $jobName is of class '${job.getClass().name}', but expecting 'hudson.matrix.MatrixProject'"
// Get the folders
new File("C:\\Path\\Path").eachDirMatch ~/_test.*/, {it ->
println "Got folder: ${it.name}"
axisFolders << it.name
}
// Check if the array is empty
assert !axisFolders.isEmpty(), "No folders found to set in the matrix, aborting"
//Sort them
axisFolders.sort()
// Now set new axis list for test folders
def newAxisList = new hudson.matrix.AxisList()
newAxisList.add(new hudson.matrix.TextAxis('TEST_FOLDERS', axisFolders))
job.setAxes(newAxisList)
println "Matrix Job $jobName new axis list: ${job.getAxes().toString()}"
What this does, basically, is get all the folders in C:\Path\Path whose names start with _test and insert them as values of the TEST_FOLDERS axis of the SlaveMatrixJob.
I had to go with two jobs, since I was not able to make this dynamic update work without installing additional plugins, which was not possible at the time.
For the second point, you could add logic to the script to check whether the folders have been updated since the last build and skip the ones that weren't. Or you could search for some plugins, but my advice is to go with the script for simpler tasks.
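For example, here is a rough sketch of that check, building on the script above (it assumes the generated sources live under the same C:\Path\Path folders and simply compares file modification times against the start time of the matrix job's last build):
// Sketch only: drop folders whose files have not changed since the last run of the matrix job
def lastBuild = job.getLastBuild()
def lastBuildTime = lastBuild ? lastBuild.getTimeInMillis() : 0L
axisFolders = axisFolders.findAll { folderName ->
    def changed = false
    new File("C:\\Path\\Path", folderName).eachFileRecurse { f ->
        if (f.lastModified() > lastBuildTime) { changed = true }
    }
    changed
}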

Related

How to get the current catalog task stage or determine the next stage?

In ServiceNow on the Workflow Editor, I have a Catalog Task with the Stage defined as "Processing" and an Advanced script to do some pre-work. When I call current.stage it gives me the stage for the previous state as the stage isn't set to the task's defined stage value until the task is completed.
Current stages in order:
Processing, Assignment, Assessment, Closure.
Example:
On the "Assignment" task when you call the current.stage
in the Advanced script it will return "Processing".
How do I get the stage for the task so if I'm on "Assignment" that I can get "Assignment"?
I had the idea of querying the wf_stage table to determine the next stage but when I call current.workflow_version it's not defined.
I wonder, since the position of the activity where you call current.stage in the workflow is fixed and won't change once decided, why not just use the hardcoded stage name, without calling current.stage or other functions?
I also have another idea, just for your reference.
How about putting an array containing all stages into the scratchpad in a Run Script at the beginning of the workflow?
workflow.scratchpad.myStages = ['Processing', 'Assignment', 'Assessment', 'Closure'];
You could then get the stage you need by searching the array afterwards - since current.stage still reports the previous stage, the entry right after it in the array is the stage of the task you are currently on.
var myStages = workflow.scratchpad.myStages;
var currentStage = myStages[myStages.indexOf(current.stage) + 1];

assign variables directly to build in jenkins

I am new to Jenkins. I have a wrapper script that runs overnight in Jenkins.
This wrapper script takes input from a .csv file which contains a list of projects. I had to invoke it this way: ./wrapper_script project.csv
This has one problem: it runs all the projects in one single build, but my requirement is that it should run one build per project. I have already installed the necessary plugins.
How can I give the project.csv content as input to the builds where I will trigger wrapper_script.sh directly?
Have a look at the Job DSL Plugin. You could create a seed job that reads the CSV file, iterates over the records and creates a job for each record. If you need a more detailed code example, please include sample data from your CSV file.
OK. Given that the CSV you provided is so simple, you could skip using a CSV library. Your Job DSL seed job would be something like this:
new File('project.csv').splitEachLine(',') { fields ->
    job(fields[0]) {
        steps {
            shell("your build command " + fields[1])
        }
    }
}
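For illustration only (the asker's actual CSV is not shown here), a two-column project.csv could look like this, where the first field becomes the job name and the second field is appended to the build command:
project-a,moduleA
project-b,moduleB
The seed job above would then generate one job per line, each invoking the wrapper for a single project.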

jenkins-pipeline load scoping "method code too large"

I'm setting up a pretty complex pipeline to handle legacy builds.
There are currently 8 stages, and more on the way - perhaps total of 12-15 stages.
Each stage does pretty similar actions:
- take a list, and for each item
- create an entry in a map that
- allocates a node and executes a set of BAT scripts (yes, Windows)
and then run the list in parallel.
The current pipeline is about 1,000 lines long, and I'm getting the "method code too large" error.
I'm in the process of refactoring this DSL into separate loadable scripts.
So far, so good.
But I just ran a test that indicates loading a script is additive to the overall pipeline. So I'd like to learn what is best to do here.
Test:
base.groovy:
def myVar // which is global (to basRef, I thought)
def setTest() { myVar='abc' }
def getTest() { return myVar }
pipeline.groovy:
stage('one') {
    def basRef = load('base.groovy')
    basRef.setTest()
    echo basRef.getTest()
}
stage('two') {
    def basRef = load('base.groovy')
    echo basRef.getTest()
}
stage one shows "abc" as expected.
stage two ALSO SHOWS "abc"
My ask:
How do I know using loadable files will not result in "method too large"?
What is the scope of a loadable file?
I've tried setting basRef = null to allow garbage collection to work, but I'm not sure it does.
Thanks for any guidance on this.

Trying to loop through an array and pass each value into a gradle task

I'm trying to do a series of things which should be quite simple, but are causing me a lot of pain. At a high level, I want to loop through an array and pass each value into a gradle task which should return an array of its own. I then want to use this array to set some Jenkins config.
I have tried a number of ways of making this work, but here is my current set-up:
project.ext.currentItemEvaluated = "microservice-1"
task getSnapshotDependencies {
    def item = currentItemEvaluated
    def snapshotDependencies = []
    //this does a load of stuff like looping through gradle dependencies,
    //which means this really needs to be a gradle task rather than a
    //function etc. It eventually populates the snapshotDependencies array.
    return snapshotDependencies
}
jenkins {
    jobs {
        def items = getItems() //returns an array of projects to loop through
        items.each { item ->
            "${item}-build" {
                project.ext.currentItemEvaluated = item
                def dependencies = project.getSnapshotDependencies
                dsl {
                    configure configureLog()
                    //set some config here using the returned dependencies array
                }
            }
        }
    }
}
I can't really change how the jenkins block is set-up as it's already well matured, so I need to work within that structure if possible.
I have tried numerous ways of trying to pass a variable into the task - here I am using a project variable. The problem seems to be that the task evaluates before the jenkins block, and I can't work out how to properly evaluate the task again with the newly set currentItemEvaluated variable.
Any ideas on what else I can try?
After some more research, I think the problem here is that there is no concept of 'calling a task' in Gradle. Gradle tasks are just a graph of tasks and their dependencies, so they execute in an order that only adheres to those dependencies.
I eventually had to solve this problem without trying to call a Gradle task (I have a build task printing the relevant data to a file, and my jenkins block reads from the file)
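A rough sketch of that workaround (the task, file and property names here are placeholders, not the original build's code):
// Execution-time task: collect whatever dependency data is needed and write it to a file
task writeSnapshotDependencies {
    doLast {
        def snapshotDependencies = []
        // ...loop over the Gradle dependencies for the current item here...
        file("${buildDir}/snapshot-dependencies.txt").text = snapshotDependencies.join('\n')
    }
}
// In the jenkins block (configuration time), read whatever the task wrote on a previous run
def depFile = file("${buildDir}/snapshot-dependencies.txt")
def dependencies = depFile.exists() ? depFile.readLines() : []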

Jenkins - passing variables between jobs?

I have two jobs in jenkins, both of which need the same parameter.
How can I run the first job with a parameter so that when it triggers the second job, the same parameter is used?
You can use the Parameterized Trigger Plugin, which will let you pass parameters from one job to another.
You also need to add the parameter you passed from upstream to the downstream job:
1. Post-Build Actions > Select "Trigger parameterized build on other projects"
2. Enter the environment variable with its value. The value can also be a Jenkins build parameter.
Detailed steps can be seen here:
https://itisatechiesworld.wordpress.com/jenkins-related-articles/jenkins-configuration/jenkins-passing-a-parameter-from-one-job-to-another/
Hope it's helpful :)
The accepted answer here does not work for my use case. I needed to be able to dynamically create parameters in one job and pass them into another. As Mark McKenna mentions there is seemingly no way to export a variable from a shell build step to the post build actions.
I achieved a workaround using the Parameterized Trigger Plugin by writing the values to a file and using that file as the parameters to import via 'Add post-build action' -> 'Trigger parameterized build...' then selecting 'Add Parameters' -> 'Parameters from properties file'.
I think the answer above needs some updating:
I was trying to create a dynamic directory to store my upstream build artifacts, so I wanted to pass my upstream job's build number to the downstream job. I tried the above steps but couldn't make it work. Here is how it worked:
I copied the artifacts from my current job using the Copy Artifact plugin.
In the post-build action of the upstream job I added the variable, e.g. "SOURCE_BUILD_NUMBER=${BUILD_NUMBER}", and configured it to trigger the downstream job.
Everything worked except that my downstream job was not able to get $SOURCE_BUILD_NUMBER to create the directory.
So I found out that to use this variable I have to define the same variable in the downstream job as a parameter.
This is because the new version of Jenkins requires you to define the variable in the downstream job as well. I hope it's helpful.
(for fellow googlers)
If you are building a serious pipeline with the Build Flow Plugin, you can pass parameters between jobs with the DSL like this:
Supposing an available string parameter "CVS_TAG", in order to pass it to other jobs:
build("pipeline_begin", CVS_TAG: params['CVS_TAG'])
parallel (
// will be scheduled in parallel.
{ build("pipeline_static_analysis", CVS_TAG: params['CVS_TAG']) },
{ build("pipeline_nonreg", CVS_TAG: params['CVS_TAG']) }
)
// will be triggered after previous jobs complete
build("pipeline_end", CVS_TAG: params['CVS_TAG'])
Hint for displaying available variables / params:
// output values
out.println '------------------------------------'
out.println 'Triggered Parameters Map:'
out.println params
out.println '------------------------------------'
out.println 'Build Object Properties:'
build.properties.each { out.println "$it.key -> $it.value" }
out.println '------------------------------------'
Just adding my answer in addition to Nigel Kirby's, as I can't comment yet:
In order to pass a dynamically created parameter, you can also export the variable in an 'Execute shell' step and then pass it through 'Trigger parameterized build on other projects' => 'Predefined parameters' => give 'YOUR_VAR=$YOUR_VAR'. My team uses this feature to pass the npm package version from the build job to deployment jobs.
UPDATE: the above only works for Jenkins-injected parameters; a parameter created from the shell still needs to use the same method, e.g. echo YOUR_VAR=${YOUR_VAR} > variable.properties, and pass that file downstream.
I faced the same issue when I had to pass a pom version to a downstream Rundeck job.
What I did was use parameter injection via a properties file, as follows (illustrated after the steps):
1) Create the properties in a properties file via shell:
Build actions:
Execute a shell script
Inject environment variables
E.g.: properties definition
2) Pass the defined properties to the downstream job:
Post-Build Actions:
Trigger parameterized build on other projects
Add parameters: Current build parameters
Add parameters: Predefined parameters
E.g.: properties sending
3) It was then possible to use $POM_VERSION as such in the downstream Rundeck job.
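To illustrate steps 1 and 2 (the original screenshots are not reproduced here, and the values are examples only), the injected properties file could contain a line such as:
POM_VERSION=1.0.3-SNAPSHOT
and the 'Predefined parameters' field of the trigger would then forward it as:
POM_VERSION=$POM_VERSION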
/!\ Jenkins Version : 1.636
/!\ For some reason when creating the triggered build, it was necessary to add the option 'Current build parameters' to pass the properties.
Reading through the answers, I don't see another option that I like so will offer it as well. I love the parameterization of jobs, but it doesn't always scale well. If you have jobs which are not directly downstream of the first job but farther down the pipeline, you don't really want to parameterize every job in the pipeline so as to be able to pass the parameters all the way through. Or if you have a large number of parameters used by a variety of other jobs (especially those not necessarily tied to one parent or master job), again parameterization doesn't work.
In these cases, I favor outputting the values to a properties file and then injecting that in whatever job I need using the EnvInject plugin. This can be done dynamically, which is another way to solve the issue from another answer above where parameterized jobs were still used. This solution scales very well in many scenarios.
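A minimal sketch of that pattern, assuming a system Groovy build step (Groovy plugin) in the producing job; the property names and values are made up for the example. The consuming job then uses the EnvInject 'Inject environment variables' build step pointed at the same file (copied or archived so the consumer can read it):
// System Groovy script: write key=value pairs into this build's workspace
// so a later job can inject them with EnvInject (names/values are illustrative)
def ws = build.workspace   // hudson.FilePath of the job's workspace
ws.child('exported.properties').write("APP_VERSION=1.4.${build.number}\nTARGET_ENV=staging\n", 'UTF-8')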
This could be done via a Groovy function:
Upstream Jenkinsfile - the param CREDENTIALS_ID is passed downstream:
pipeline {
    agent any
    stages {
        stage('trigger downstream') {
            steps {
                build job: "my_downsteam_job_name",
                    parameters: [string(name: 'CREDENTIALS_ID', value: 'other_credentials_id')]
            }
        }
    }
}
Downstream Jenkinsfile - if the param CREDENTIALS_ID is not passed from upstream, the function returns a default value:
def getCredentialsId() {
    if (params.CREDENTIALS_ID) {
        return params.CREDENTIALS_ID;
    } else {
        return "default_credentials_id";
    }
}
pipeline {
    environment {
        TEST_PASSWORD = credentials("${getCredentialsId()}")
    }
}
You can use Hudson Groovy builder to do this.
First Job in pipeline
Second job in pipeline
I figured it out!
With almost 2 hours' worth of trial and error, I figured it out.
This WORKS and is what you do to pass variables to remote job:
def handle = triggerRemoteJob(remoteJenkinsName: 'remoteJenkins', job: 'RemoteJob', parameters: "param1=${env.PARAM1}\nparam2=${env.param2}")
Use \n to separate the two parameters, no spaces.
As opposed to
parameters: '''someparams'''
we use
parameters: "someparams"
the " ... " is what gets us the values of the desired variables. (These are double quotes, not two single quotes)
the ''' ... ''' or ' ... ' will not get us those values. (Three single quotes or just single quotes)
All parameters here are defined in environment{} block at the start of the pipeline and are modified in stages>steps>scripts wherever necessary.
I also tested and found that when you use " ... " you cannot use something like ''' ... "..." ''' or "... '..'..." or any combination of it...
The catch here is that when you are using "..." in parameters section, you cannot pass a string parameter; for example This WILL NOT WORK:
def handle = triggerRemoteJob(remoteJenkinsName: 'remoteJenkins', job: 'RemoteJob', parameters: "param1=${env.PARAM1}\nparam2='param2'")
If you want to pass something like the one above, you will need to set an environment variable param2='param2' and then use ${env.param2} in the parameters section of the remote trigger plugin step.
You can also make a job write into a properties file somewhere and have another job read it. One of the ways to do that is to inject variables via the EnvInject plugin.
