Adding Jenkins pipeline triggers on agent node - jenkins-pipeline

I am trying to add a trigger to my pipeline file:
pipeline {
    agent {
        node {
            label 'Deploymentserver'
            triggers {
                cron('H 09 * * 1-5')
            }
        }
    }
}
This code gives the error:
WorkflowScript: 22: Invalid config option "triggers" for agent type "node". Valid config options are [label, customWorkspace] # line 22, column 11.
triggers {
Then I tried to put it outside the agent block, assuming it wouldn't work, just to test:
pipeline {
    agent {
        node {
            label 'Deploymentserver'
        }
    }
    triggers {
        cron('H 09 * * 1-5')
    }
}
It doesn't give any errors, but it doesn't trigger my pipeline either.
It seems that the triggers option is not supported inside the agent node block.
It is a declarative pipeline integrated with Bitbucket. How can I get this to work?

Your second attempt is the correct syntax.
As you can see in the documentation, the correct location for triggers is at the same level as the agent directive:
pipeline {
    agent {
        label 'Deploymentserver'
    }
    triggers {
        cron('H 09 * * 1-5')
    }
    stages {
        ...
    }
    ...
}
Therefore the configuration itself is not the issue and should work as expected.
One reason it might not be working is that you must run the pipeline at least once (manually or automatically) after adding the trigger configuration in order for it to take effect.
You can go into the job configuration in the Jenkins UI and verify that the cron trigger settings appear there; if so, your pipeline trigger is configured properly.
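For completeness, here is a minimal runnable sketch (the stage content is a placeholder) that combines the node agent from the question with the trigger at the pipeline level:
pipeline {
    agent {
        node {
            label 'Deploymentserver'
        }
    }
    triggers {
        // H spreads the start time across 09:00-09:59, weekdays only
        cron('H 09 * * 1-5')
    }
    stages {
        stage('Deploy') {
            steps {
                echo 'Runs on Deploymentserver, started by the cron trigger'
            }
        }
    }
}
Remember to run it once manually so Jenkins registers the trigger.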

Related

Jenkins function of shared library executed on wrong node

I have a simple shared library and want to call a function which creates a file.
My pipeline script looks like this:
@Library('jenkins-shared-library') _
pipeline {
    agent {
        node {
            label 'node1'
        }
    }
    stages {
        stage('test') {
            steps {
                script {
                    def host = sh(script: 'hostname', returnStdout: true).trim()
                    echo "Hostname is: ${host}"
                    def testlib = new TestLibrary(script: this)
                    testlib.createFile("/tmp/file1")
                }
            }
        }
    }
}
This pipeline job is triggered by another job which runs on the built-in master node.
The stage 'test' is correctly executed on 'node1'.
Problem: the file "/tmp/file1" is created on the Jenkins master instead of on "node1".
I also tried it without the shared library, loading a Groovy script directly in a step:
pipeline {
    agent {
        node {
            label 'node1'
        }
    }
    stages {
        stage('test') {
            steps {
                script {
                    def script = load "/path/to/script.groovy"
                    script.createFile("/tmp/file1")
                }
            }
        }
    }
}
This also creates the file on the master node instead of on "node1".
Is there no way of loading external libraries or classes and executing them on the node where the stage is running? I don't want to place all the code directly in the step.
OK, I found it myself: Groovy code by definition always runs on the master node. There is no way to run script or shared-library Groovy code itself on a node other than the master; only pipeline steps execute on the agent.
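That said, the usual workaround is to have the library code delegate the file I/O to pipeline steps, since steps such as writeFile and sh do execute on the node the stage runs on. A minimal sketch, assuming the original TestLibrary used plain Java file I/O (which always runs on the controller):
// src/TestLibrary.groovy in the shared library (hypothetical layout)
class TestLibrary implements Serializable {
    def script  // the pipeline script, passed in as 'script: this'

    void createFile(String path) {
        // new File(path).createNewFile() would run on the controller;
        // the writeFile step runs in the enclosing node context ('node1')
        script.writeFile(file: path, text: '')
    }
}
Called exactly as in the question (new TestLibrary(script: this)), this creates the file on the agent executing the stage.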

Parallel stages with parallel input message in Jenkins pipeline

I have the following pipeline:
pipeline {
    agent any
    stages {
        stage('CI') {
            parallel {
                stage('גמלאות') {
                    steps {
                        input message: "Pass Sanity?"
                    }
                }
                stage('וועדות') {
                    steps {
                        input message: "Pass Sanity?"
                    }
                }
                stage('מבוטח') {
                    steps {
                        input message: "Pass Sanity?"
                    }
                }
            }
        }
    }
}
I'm expecting to see three options to select in the UI, one for each stage, but I'm getting only one instead.
If I look at the traditional console output, I am able to choose whether to proceed or to abort.
Is there a way to get this functionality through the UI?
Here is an example of the UI:
It seems that the code is correct. The problem is with the Jenkins UI plugin.
I moved to the Blue Ocean plugin, and now it seems to work fine.

parameterized-remote-trigger is throwing a 405 exception

I am trying to trigger job A (configured to allow remote triggering) from another job B, and job B needs to wait until the results come back to show success or failure. I initially tried the REST API with a curl command, which works perfectly. Here's the curl command:
curl -v -X POST 'https://xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters?config_files=./jenkins/unit-tests.json' --user xxxx:110f4dfa33ba8f8ef5d8d299beb6aa1543
I then chose the Parameterized Remote Trigger plugin, which is installed on the Jenkins server, because it handles the polling mechanism internally and has handler-friendly methods. Please see the triggerRemoteJob code below. It fails with a 405 error, which means "method not allowed" in HTTP terms, so it looks like the plugin is using GET instead of POST. I added an option for logging, but it does not seem to show any more detail.
def handle = triggerRemoteJob(
    remoteJenkinsName: 'remote-master',
    job: 'https://xxx.xxx.com/xxx-xxx/job/xxx/job/master/buildWithParameters',
    remoteJenkinsUrl: 'https://xxx.xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters',
    auth: TokenAuth(apiToken: hudson.util.Secret.fromString('110f4dfa33ba8f8ef5d8d299beb6aa1543'), userName: 'xxxx'),
    parameters: 'config_files=./jenkins/unit-tests')
I am getting the following error:
[Pipeline] triggerRemoteJob
##########################################################################
Parameterized Remote Trigger Configuration:
- job: https://xxx.xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters
- remoteJenkinsUrl: https://xxx.xxx.xxx/xxx-xxx/job/ius/job/master/buildWithParameters
- auth: 'Token Authentication' as user 'sseri'
- parameters: [config_files=./jenkins/unit-tests]
- blockBuildUntilComplete: true
- connectionRetryLimit: 5
- trustAllCertificates: false
##########################################################################
Connection to remote server failed [405], waiting to retry - 10 seconds until next attempt. URL: https://xxx.xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters/api/json, parameters:
Retry attempt #1 out of 5
Please help me in this regard!
I am not sure about the plugin you are using, but it's quite simple to implement this scenario ("call a downstream job from upstream and fail the upstream if the downstream fails") without any plugins.
Take a look at my example below.
Let's say you have two jobs called jobA and jobB, and your goal is to call jobB from jobA and fail jobA if jobB fails.
Scripted Pipeline for jobA
node() {
    try {
        // propagate: false returns the downstream result instead of failing this build immediately
        def jobB = build(job: 'jobB', parameters: [string(name: 'parameterName', value: 'parameterValue')], propagate: false)
        def jobBStatus = jobB.getResult()
        if (jobBStatus == 'FAILURE') {
            throw new RuntimeException("Downstream job-b failed with reason ...")
        }
        ...
    } catch (Exception e) {
        throw e
    }
}
Declarative Pipeline for jobA
pipeline {
    agent any
    stages {
        stage('call jobB') {
            steps {
                script {
                    // propagate: false returns the downstream result instead of failing this build immediately
                    def jobB = build(job: 'jobB', parameters: [
                        string(name: 'parameterName', value: 'parameterValue')
                    ], propagate: false)
                    def jobBStatus = jobB.getResult()
                    if (jobBStatus == 'FAILURE') {
                        error("Downstream job-b failed with reason ...")
                    }
                }
            }
        }
    }
}
Try using the Parameterized-Remote-Trigger-Plugin. It should give you what you want. I'm having some problems configuring it with authentication tokens and users from a Jenkinsfile, but if you are using the GUI I'm sure you will get the job done.
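One hedged observation (an assumption drawn from the log above, not from the plugin docs): the retry message shows the plugin appending /api/json to the configured job URL, so pointing job at the job's root URL rather than its buildWithParameters endpoint may be what avoids the 405. A sketch reusing the question's placeholders:
def handle = triggerRemoteJob(
    remoteJenkinsName: 'remote-master',
    // job root only; the plugin constructs the trigger and polling
    // URLs (such as .../api/json) itself
    job: 'https://xxx.xxx.com/xxx-xxx/job/xxx/job/master',
    auth: TokenAuth(apiToken: hudson.util.Secret.fromString('110f4dfa33ba8f8ef5d8d299beb6aa1543'), userName: 'xxxx'),
    parameters: 'config_files=./jenkins/unit-tests')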

How to use Validating String Parameter Plugin in the Jenkins declarative pipeline code?

I'm curious whether it is possible to use the Validating String Parameter plugin in Jenkins declarative pipeline code. I already have a working setup defined via the job UI, but my intention is to put everything in the pipeline, defined as:
string(name: '', ......).
Unfortunately, all examples on the web explain how to set up the validation in the UI, which I already have. Or is it one of those plugins that is not supported in the pipeline model?
This plugin can be used via the validatingString parameter type in declarative pipeline code:
pipeline {
    agent any
    parameters {
        validatingString(name: "test", defaultValue: "", regex: /^abc-[0-9]+$/, failedValidationMessage: "Validation failed!", description: "ABC")
    }
    stages {
        stage("Test") {
            steps {
                echo "${params.test}"
            }
        }
    }
}
Keep in mind that the first time you run your pipeline after adding this code, the parameter won't show up; it is added during that first run. After that, you will see the parameter in the pipeline UI, and when you run the parameterized pipeline, the validation will be applied.
I am not sure why, but when I wrapped each of my arguments in a named key, I was able to get past the error I received. (Note that the javaposse.jobdsl package in the error indicates this is the Job DSL context rather than a declarative pipeline.)
Passing the arguments positionally, as below, gives you the javaposse.jobdsl.dsl.helpers.BuildParametersContext.validatingString() is applicable for argument types error:
validatingString(
    "EMAIL_VALIDATED",
    'defaultEmail',
    'someregex',
    'somevalidationfailuremessage',
    'Use your email'
)
However, this worked:
validatingString {
    name("EMAIL_VALIDATED")
    defaultValue('defaultEmail')
    regex('someregex')
    failedValidationMessage('somevalidationfailuremessage')
    description('Use your email')
}

Is there a way to move the entire post {} build section in Jenkinsfile to the global pipeline library?

I'm relatively new to Jenkins pipelines, but having already implemented a few, I've realised I need to start using a Jenkins shared library before I go mad.
I have already figured out how to define some repetitive steps in the library and call them with less clutter from the Jenkinsfile, but I am not sure whether the same thing can be done for the entire post-build section (though I've read about how to define the entire pipeline in the library, and similar), as this is a pretty much static ending of every single pipeline:
@Library('jenkins-shared-library') _
pipeline {
    agent none
    stages {
        stage('System Info') {
            agent any
            steps { printSysInfo() }
        }
        stage('Init') {
            agent { label 'WinZipSE' }
            steps { init('SCMroot') }
        }
        stage('Build') {
            agent any
            steps { doMagic() }
        }
    }
    // This entire 'post {}' section needs to go to a shared lib
    // and be called just with a simple method call, e.g.
    // doPostBuild()
    post {
        always {
            node('master') {
                googlechatnotification(
                    message: '[$BUILD_STATUS] Build $JOB_NAME $BUILD_NUMBER has finished',
                    url: 'id:credential_id_for_Ubuntu')
                step(
                    [$class: 'Mailer',
                     recipients: 'sysadmins@example.com me@example.com',
                     notifyEveryUnstableBuild: true,
                     sendToIndividuals: true]
                )
            }
        }
        success {
            node('master') {
                echo 'This will run only if successful'
            }
        }
        failure {
            node('master') {
                echo 'This will run only if failed'
            }
        }
        // and so on
    }
}
I just don't know how to achieve that syntactically. Sure, I can define the entire post-build section in a lib vars file like doPostBuild.groovy:
def call() {
    post {...}
}
but how would I eventually call it from within my Jenkinsfile, outside of the stages section, so that it still behaves as a post {} block?
I could call it within some stage('post build') { doPostBuild() }, but that wouldn't work the way a true post {} section is supposed to: for example, it wouldn't get executed if there was a failure in one of the previous stages.
Any thoughts on that, and ideally working examples?
I am not entirely sure whether this will work, as I don't use declarative pipelines and am unsure how rigid the top-level structure is, but I would resort to a script block.
@Library('jenkins-shared-library') _
pipeline {
    agent none
    stages {
        stage('System Info') {
            agent any
            steps { printSysInfo() }
        }
        stage('Init') {
            agent { label 'WinZipSE' }
            steps { init('SCMroot') }
        }
        stage('Build') {
            agent any
            steps { doMagic() }
        }
    }
    // The entire 'post {}' section replaced with a simple method call
    script {
        doPostBuild()
    }
}
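A different sketch that stays within valid declarative syntax (assuming the shared library is already configured): a post condition body may call custom steps, so you can keep a thin post {} skeleton in the Jenkinsfile and move the actual work into a library step, e.g. a hypothetical vars/doPostBuild.groovy:
// vars/doPostBuild.groovy (hypothetical name)
def call() {
    node('master') {
        googlechatnotification(
            message: '[$BUILD_STATUS] Build $JOB_NAME $BUILD_NUMBER has finished',
            url: 'id:credential_id_for_Ubuntu')
        step([$class: 'Mailer',
              recipients: 'sysadmins@example.com me@example.com',
              notifyEveryUnstableBuild: true,
              sendToIndividuals: true])
    }
}
The Jenkinsfile then shrinks to:
post {
    always { doPostBuild() }
}
This does not remove post {} entirely, but it reduces each condition to a one-line call, and success/failure variants can branch on currentBuild.currentResult inside the library step.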
