Pass value from job2 to job1 in Jenkins - bash

I have a Jenkins job1 which triggers job2:
stage("trigger job2") {
    steps {
        build job: job2,
            parameters: [
                string(name: "test1", value: "test1"),
                string(name: "test2", value: "test2")
            ]
    }
}
job2 is triggered and runs. I need to know whether job2's last stage fails; in my case the last stage is called "RESULT". If the RESULT stage of job2 is red/failed, that result should be passed back to job1, and the "trigger job2" stage in job1 should also be displayed in red.
I tried the cases below, but they do not work.
My Jenkins version is 2.346.1.
What I tried:
JOB1 modified:
def p = build job: job2, propagate: true,
    parameters: [
        string(name: "test1", value: "test1"),
        string(name: "test2", value: "test2")
    ]
script {
    build.waitForCompletion()
}
if (p.result == 'FAILURE') {
    currentBuild.result = "FAILURE"
    error 'Job2 failed, marking Job1 as failed in the current stage'
}
Try 2 (JOB1 modified):
def result = build.waitForCompletion()
if (result.result == 'FAILURE') {
    currentBuild.result = "FAILURE"
    error 'Job2 failed, marking Job1 as failed in the current stage'
}
I tried three or four different approaches, but none of them worked. Could someone advise me?

I would say this is the default behaviour in Jenkins.
A Jenkins job fails by default if any of its stages fail, so job2 will fail if its last stage fails. As far as I know, and as the documentation of the build step says, it is enough to write this:
build job: job2,
    parameters: [
        string(name: "test1", value: "test1"),
        string(name: "test2", value: "test2")
    ]
to make job1 fail when job2 fails, because the build step propagates the downstream result by default. The stage that triggers job2 (in job1) should then be red, as you want.
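If you prefer to inspect job2's result yourself instead of relying on propagation (for example, to log it before failing the stage), a minimal sketch along these lines should work; the job name and parameter values are taken from the question:
stage("trigger job2") {
    steps {
        script {
            // Run job2 without letting its result automatically fail job1
            def downstream = build job: "job2",
                propagate: false,
                wait: true,
                parameters: [
                    string(name: "test1", value: "test1"),
                    string(name: "test2", value: "test2")
                ]
            echo "job2 finished with result: ${downstream.result}"
            if (downstream.result != 'SUCCESS') {
                // Fail this stage (and therefore job1) explicitly
                error "job2 failed with result ${downstream.result}"
            }
        }
    }
}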

Related

environmentDashboard-Jenkinsfile

I have installed the environment-dashboard plugin in Jenkins (https://plugins.jenkins.io/environment-dashboard/) and need to incorporate its feature into my Jenkinsfile. Sample Jenkinsfile:
pipeline {
    agent any
    parameters {
        string(name: "branch_name", defaultValue: "master", description: "Enter the branch name")
        string(name: "project_name", defaultValue: "TestAutomation", description: "BuildWorkspace")
        choice(name: "mach_name", choices: ["LINUX", "WINDOWS"], description: "OS Selection")
        string(name: "build_suffix", defaultValue: "", description: "Provide a build suffix name")
    }
    environment {
        TARBALL_PATH = ""
    }
    Wrappers{
        environmentDashboard {
            environmentName('Testing')
            componentName('GCC_COMPILER')
            buildNumber({BUILD_ID})
            buildJob({JOB_NAME})
            packageName('TARBALL')
            addColumns(true)
            columns('RESULT', 'PASS')
        }
    }
    stages {
        stage('TARBALL GENERATION') {
            steps {
                script {
                    buildName "${mach_name}#${BUILD_NUMBER}"
                    buildDescription "${mach_name} Test Execution on ${NODE_NAME}"
                }
                echo "Branch name is ${params.branch_name}"
                echo "Preferred MACH: ${params.mach_name}"
                echo "Reached Here after printing selected parameters"
                sh """
                #!/bin/bash
                echo "Multiline shell steps execution"
                printenv | sort
                """
            } //steps
        } //stage
    } //stages
} //pipeline
I am getting the following error:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 13: Undefined section "Wrappers" @ line 13, column 5.
Wrappers{
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:142)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:127)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:561)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:522)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:327)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:427)
Finished: FAILURE
You need to write wrappers in lowercase, i.e. wrappers {} rather than Wrappers {}.
For a pipeline job you need to use the following step:
environmentDashboard(addColumns: false, buildJob: '', buildNumber: 'Version-1', componentName: 'WebApp-1', data: [], nameOfEnv: 'Environment-1', packageName: '') {
    // some block
}
In fact, you can skip // some block and just use {}. That worked for me.
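Placed in a declarative pipeline, a minimal sketch might look like the following; the parameter values are the placeholders from the snippet above, and the exact parameters supported may vary with the plugin version:
pipeline {
    agent any
    stages {
        stage('Dashboard') {
            steps {
                // Block-scoped step provided by the environment-dashboard plugin;
                // the values here are placeholders
                environmentDashboard(nameOfEnv: 'Environment-1',
                                     componentName: 'WebApp-1',
                                     buildNumber: 'Version-1',
                                     buildJob: '',
                                     packageName: '',
                                     data: [],
                                     addColumns: false) {
                    echo 'Recording this build on the environment dashboard'
                }
            }
        }
    }
}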

How to trigger multiple runs in a single Jenkins pipeline job?

I have a pipeline job which runs with the pipeline Groovy script below:
pipeline {
    agent none
    parameters {
        string(name: 'Unique_Number', defaultValue: '', description: 'Enter Unique Number')
    }
    stages {
        stage('Build') {
            agent { node { label 'Build' } }
            steps {
                script {
                    sh 'build.sh'
                }
            }
        }
        stage('Deploy') {
            agent { node { label 'Deploy' } }
            steps {
                script {
                    sh 'deploy.sh'
                }
            }
        }
        stage('Test') {
            agent { node { label 'Test' } }
            steps {
                script {
                    sh 'test.sh'
                }
            }
        }
    }
}
I trigger this job multiple times with a different unique ID number as the input parameter, so I end up with multiple runs/builds of this job sitting at different stages.
Given that, I would like to promote multiple runs/builds to the next stage (i.e. from Build to Deploy, or from Deploy to Test) in this pipeline job as one single action, instead of promoting each individual run/build separately. Is there any possibility?
I was also trying to do the same thing and found no relevant answers; maybe this helps someone.
The code below reads a file that contains Jenkins job names and runs those jobs iteratively from one single job.
Please adjust the code for your own Jenkins setup.
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                script {
                    git branch: 'Your branch name', credentialsId: 'Your credentials', url: 'Your Bitbucket repo URL'
                    // Read a file from the workspace that contains the Jenkins job names
                    def filePath = readFile "${WORKSPACE}/Your file location"
                    // Read the file line by line
                    def lines = filePath.readLines()
                    // Iterate and run the Jenkins jobs one by one
                    for (line in lines) {
                        build(job: "$line/branchName",
                            parameters: [
                                string(name: 'vertical', value: "${params.vertical}"),
                                string(name: 'environment', value: "${params.environment}"),
                                string(name: 'branch', value: "${params.aerdevops_branch}"),
                                string(name: 'project', value: "${params.host_project}")
                            ]
                        )
                    }
                }
            }
        }
    }
}
You can start multiple jobs from one pipeline if you run something like:
build job: "One", wait: false
build job: "Two", wait: false
Your main job starts the child pipelines, and the child pipelines run in parallel.
You can read the Pipeline Build Step documentation for more information.
You can also read about parallel runs in declarative pipelines, where you can find a lot of examples of running stages in parallel.
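A minimal declarative sketch of that parallel fan-out (the job names 'One' and 'Two' come from the example above):
pipeline {
    agent any
    stages {
        stage('Fan out') {
            parallel {
                stage('Run job One') {
                    steps {
                        // wait: false fires the downstream job and returns immediately
                        build job: 'One', wait: false
                    }
                }
                stage('Run job Two') {
                    steps {
                        build job: 'Two', wait: false
                    }
                }
            }
        }
    }
}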

How to access jenkinsfile local environment variables from post steps?

I'm attempting to add a post-build step to my Jenkinsfile to apply a label to a build. The Jenkins variables are resolved fine, but the environment variables I've defined globally within the file are not resolved. Here are the relevant bits:
pipeline {
    environment {
        VERSION_MAJOR = '1'
        VERSION_MINOR = '0'
        VERSION_PATCH = '0'
    }
    stages {
        stage('Build') {
            steps {
                echo('Testing p4Tag')
            }
        }
    }
    post {
        success {
            build job: 'Copy-Installers',
                parameters: [
                    string(name: 'MASTER_VERSION_MAJOR', value: '${VERSION_MAJOR}'),
                    string(name: 'MASTER_VERSION_MINOR', value: '${VERSION_MINOR}'),
                    string(name: 'MASTER_VERSION_FEATURE', value: '${VERSION_PATCH}'),
                    string(name: 'MASTER_BUILD_NUMBER', value: '${BUILD_NUMBER}'),
                    string(name: 'COMPONENT_NAME', value: 'Console')
                ]
        }
    }
}
The downstream build fails. Looking at the log it appears the local environment variables are not passed in:
The Build Number name is: ${VERSION_MAJOR}
The Build Number name is: ${VERSION_MINOR}
The Build Number name is: ${VERSION_PATCH}
The Build Number name is: 924
The Build environment name is: Console
Update
I've tried the following variations as well:
value: '${env.VERSION_MAJOR}' - Fails similarly to above
value: '$VERSION_MAJOR' - Fails similarly to above
value: $VERSION_MAJOR - Fails with "no such property" error
Update 2
For what it's worth, these variables can be referenced from the stage section. For instance, this works as expected:
stage('build') {
    sh('./Build/build-package.sh $VERSION_MAJOR $VERSION_MINOR $VERSION_PATCH $BUILD_NUMBER')
}
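Worth noting as a possible cause: in Groovy, single-quoted strings are never interpolated, only double-quoted strings are. So a sketch of the same post block with double quotes around the values should let the declared environment variables through:
post {
    success {
        build job: 'Copy-Installers',
            parameters: [
                // Double quotes make Groovy interpolate the values
                string(name: 'MASTER_VERSION_MAJOR', value: "${VERSION_MAJOR}"),
                string(name: 'MASTER_VERSION_MINOR', value: "${VERSION_MINOR}"),
                string(name: 'MASTER_VERSION_FEATURE', value: "${VERSION_PATCH}"),
                string(name: 'MASTER_BUILD_NUMBER', value: "${BUILD_NUMBER}"),
                string(name: 'COMPONENT_NAME', value: 'Console')
            ]
    }
}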

Jenkins declarative pipeline - User input parameters

I've looked for some examples of user input parameters using the Jenkins declarative pipeline; however, all the examples use scripted pipelines. Here is a sample of the code I'm trying to get working:
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                input id: 'test', message: 'Hello', parameters: [string(defaultValue: '', description: '', name: 'myparam')]
                sh "echo ${env}"
            }
        }
    }
}
I can't seem to work out how to access the myparam variable; it would be great if someone could help me out.
Thanks
When using input, it is very important to use agent none at the global pipeline level and to assign agents to individual stages. Put the input step in a separate stage that also uses agent none. If you allocate an agent node for the input stage, that agent's executor will remain reserved by this build until a user continues or aborts the build.
This example should help with using input:
def approvalMap // collect data from the approval step

pipeline {
    agent none
    stages {
        stage('Stage 1') {
            agent none
            steps {
                timeout(60) { // timeout waiting for input after 60 minutes
                    script {
                        // capture the approval details in approvalMap
                        approvalMap = input(
                            id: 'test',
                            message: 'Hello',
                            ok: 'Proceed?',
                            parameters: [
                                choice(
                                    choices: 'apple\npear\norange',
                                    description: 'Select a fruit for this build',
                                    name: 'FRUIT'
                                ),
                                string(
                                    defaultValue: '',
                                    description: '',
                                    name: 'myparam'
                                )
                            ],
                            submitter: 'user1,user2,group1',
                            submitterParameter: 'APPROVER'
                        )
                    }
                }
            }
        }
        stage('Stage 2') {
            agent any
            steps {
                // print the details gathered from the approval
                echo "This build was approved by: ${approvalMap['APPROVER']}"
                echo "This build is brought to you today by the fruit: ${approvalMap['FRUIT']}"
                echo "This is myparam: ${approvalMap['myparam']}"
            }
        }
    }
}
When the input step returns, if it has only a single parameter it returns that value directly; if there are multiple parameters, it returns a map (hash, dictionary) of the values. To capture this value we have to drop into Groovy scripting.
It is good practice to wrap your input code in a timeout step so that builds don't remain in an unresolved state for an extended time.
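For comparison, a quick sketch of the single-parameter case, where input returns the chosen value directly rather than a map (the choices are just an example):
script {
    // With exactly one parameter, input returns that parameter's value directly
    def fruit = input(
        message: 'Pick a fruit',
        parameters: [choice(choices: 'apple\npear\norange', name: 'FRUIT')]
    )
    echo "You picked: ${fruit}"
}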

How to define a Jenkins build trigger in a Jenkinsfile to start a build after another job

I would like to define a build trigger in my Jenkinsfile. I know how to do it for the BuildDiscarderProperty:
properties([[$class: 'jenkins.model.BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '50', artifactNumToKeepStr: '20']]])
How can I set the build trigger that starts the job when another project has been built? I cannot find a suitable entry in the Java API docs.
Edit:
My solution is to use the following code:
stage('Build Agent') {
    if (env.BRANCH_NAME == 'develop') {
        try {
            // try to start the subsequent job, but don't wait for it to finish
            build job: '../Agent/develop', wait: false
        } catch (Exception ex) {
            echo "An error occurred while building the agent."
        }
    }
    if (env.BRANCH_NAME == 'master') {
        // start the subsequent job and wait for it to finish
        build job: '../Agent/master', wait: true
    }
}
I just looked for the same thing and found this Jenkinsfile in jenkins-infra/jenkins.io.
In short:
properties([
    pipelineTriggers([cron('H/30 * * * *')])
])
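Since the question is about starting a build after another job, the same properties/pipelineTriggers mechanism should also accept the upstream trigger; a minimal sketch, with the upstream job name as a placeholder:
properties([
    pipelineTriggers([
        // Fire this pipeline after 'some-upstream-job' finishes successfully
        upstream(upstreamProjects: 'some-upstream-job', threshold: hudson.model.Result.SUCCESS)
    ])
])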
This is an example:
// Project test1
pipeline {
    agent any
    stages {
        stage('hello') {
            steps {
                container('dind') {
                    sh """
                    echo "Hello world!"
                    """
                }
            }
        }
    }
    post {
        success {
            build propagate: false, job: 'test2'
        }
    }
}
post {} is executed when project test1 finishes, and the code inside success {} is only executed when project test1 is successful.
build propagate: false, job: 'test2' calls project test2.
propagate: false means that test2's result does not affect test1's result; if test1 should also not wait for test2 to complete, add wait: false.
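A sketch of that fire-and-forget variant, reusing the job name from the example above:
post {
    success {
        // wait: false starts test2 and returns immediately;
        // propagate only matters when wait is true
        build job: 'test2', wait: false
    }
}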
