How to use Validating String Parameter Plugin in the Jenkins declarative pipeline code? - jenkins-pipeline

I'm curious whether it is possible to define the Validating String Parameter plugin in Jenkins declarative pipeline code. I already have a working setup defined via the job UI, but my intention is to put everything in the pipeline, defined as:
string(name: '', ......)
Unfortunately, all the examples on the web explain how to set up the validation in the UI, which I already have. Or is it one of those plugins that is not supported in the pipeline model?

This plugin can be used as a validatingString parameter in the declarative pipeline code.
pipeline {
    agent any

    parameters {
        validatingString(name: "test", defaultValue: "", regex: /^abc-[0-9]+$/, failedValidationMessage: "Validation failed!", description: "ABC")
    }

    stages {
        stage("Test") {
            steps {
                echo "${params.test}"
            }
        }
    }
}
Keep in mind that the first time you run the pipeline after adding this code, the parameter won't show up; it is added during that first run. After that, you will see the parameter in the pipeline UI, and when you run the parameterized pipeline, the validation will be applied.
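As a quick sanity check of the pattern above, the regex can be tried in plain Groovy outside Jenkins (a minimal sketch; the sample values are made up):
// Plain Groovy check of the same pattern used in the validatingString parameter
def pattern = /^abc-[0-9]+$/
assert "abc-123" ==~ pattern      // accepted by the parameter
assert !("abc" ==~ pattern)       // rejected, triggering failedValidationMessage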

I am not sure why, but when I wrapped each of my arguments in a named key, I was able to get past the error I received.
Passing the arguments positionally, like this, gives a javaposse.jobdsl.dsl.helpers.BuildParametersContext.validatingString() is applicable for argument types error:
validatingString(
    "EMAIL_VALIDATED",
    'defaultEmail',
    'someregex',
    'somevalidationfailuremessage',
    'Use your email'
)
However, this worked:
validatingString {
    name("EMAIL_VALIDATED")
    defaultValue('defaultEmail')
    regex('someregex')
    failedValidationMessage('somevalidationfailuremessage')
    description('Use your email')
}
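For context, this closure form is Job DSL syntax rather than declarative pipeline syntax, so it would typically sit inside a job definition's parameters block, roughly like the sketch below (the job name is made up):
// Job DSL sketch: the validatingString closure lives inside a parameters block
pipelineJob('example-job') {
    parameters {
        validatingString {
            name("EMAIL_VALIDATED")
            defaultValue('defaultEmail')
            regex('someregex')
            failedValidationMessage('somevalidationfailuremessage')
            description('Use your email')
        }
    }
}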

Related

Adding Jenkins pipeline triggers on agent node

I am trying to add a trigger to my pipeline file:
pipeline {
    agent {
        node {
            label 'Deploymentserver'
            triggers {
                cron('H 09 * * 1-5')
            }
        }
    }
This code gives the error:
WorkflowScript: 22: Invalid config option "triggers" for agent type "node". Valid config options are [label, customWorkspace] # line 22, column 11.
triggers {
Then I tried to put it outside the agent block, assuming it wouldn't work, just to test:
pipeline {
    agent {
        node {
            label 'Deploymentserver'
        }
    }
    triggers {
        cron('H 09 * * 1-5')
    }
It doesn't give any errors, but it doesn't trigger my pipeline either.
It seems that the triggers option is not supported inside the agent node block.
It is a declarative pipeline integrated with Bitbucket. How can I get this to work?
Your second attempt is the correct syntax.
As you can see in the documentation, the correct location for the triggers directive is at the same level as the agent directive:
pipeline {
    agent {
        label 'Deploymentserver'
    }
    triggers {
        cron('H 09 * * 1-5')
    }
    stages {
        ...
    }
    ...
}
Therefore the configuration itself is not the issue and should work as expected.
One thing that might be causing you problems is that you must run the pipeline at least once (manually or automatically) after adding the trigger configuration in order for it to take effect.
You can go into the job configuration in the Jenkins UI and verify that the cron trigger settings appear there; if so, your pipeline trigger is configured properly.
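Putting it together, a minimal runnable Jenkinsfile for this setup could look like the sketch below (the stage body is just a placeholder echo):
pipeline {
    agent {
        label 'Deploymentserver'
    }
    // Scheduled build trigger: weekdays, at a minute within 09:00 chosen by the H hash
    triggers {
        cron('H 09 * * 1-5')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Triggered by cron or manually'
            }
        }
    }
}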

parameterized-remote-trigger is throwing 405 exception

I am trying to trigger a job A (configured to allow remote triggering) from another job B, and job B needs to wait until the result comes back to report success or failure. I initially tried the REST API with a curl command, which works perfectly. Here's the curl command:
curl -v -X POST 'https://xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters?config_files=./jenkins/unit-tests.json' --user xxxx:110f4dfa33ba8f8ef5d8d299beb6aa1543
I then chose the Parameterized Remote Trigger plugin installed on the Jenkins server, because it handles the polling mechanism internally and has handler-friendly methods. Please see the triggerRemoteJob code below. It fails with a 405 error, which means "method not allowed" in HTTP terms, so it looks like the plugin is using GET instead of POST. I added an option for logging, but it does not seem to show any more log output.
def handle = triggerRemoteJob(
    remoteJenkinsName: 'remote-master',
    job: 'https://xxx.xxx.com/xxx-xxx/job/xxx/job/master/buildWithParameters',
    remoteJenkinsUrl: 'https://xxx.xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters',
    auth: TokenAuth(apiToken: hudson.util.Secret.fromString('110f4dfa33ba8f8ef5d8d299beb6aa1543'), userName: 'xxxx'),
    parameters: 'config_files=./jenkins/unit-tests')
I am getting the following error:
[Pipeline] triggerRemoteJob
##########################################################################
Parameterized Remote Trigger Configuration:
- job: https://xxx.xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters
- remoteJenkinsUrl: https://xxx.xxx.xxx/xxx-xxx/job/ius/job/master/buildWithParameters
- auth: 'Token Authentication' as user 'sseri'
- parameters: [config_files=./jenkins/unit-tests]
- blockBuildUntilComplete: true
- connectionRetryLimit: 5
- trustAllCertificates: false
##########################################################################
Connection to remote server failed [405], waiting to retry - 10 seconds until next attempt. URL: https://xxx.xxx.xxx/xxx-xxx/job/xxx/job/master/buildWithParameters/api/json, parameters:
Retry attempt #1 out of 5
Please help me in this regard!
I am not sure about the plugin you are using, but it's quite simple to implement this scenario ("call a downstream job from an upstream job and fail the upstream if the downstream fails") without any plugins.
Take a look at my example below.
Let's say you have two jobs called jobA and jobB, and your goal is to call jobB from jobA and fail jobA if jobB fails.
Scripted Pipeline for jobA
node() {
    try {
        // propagate: false keeps a downstream failure from aborting jobA immediately,
        // so the result can be inspected explicitly below
        def jobB = build(job: 'jobB', propagate: false, parameters: [string(name: "parameterName", value: "parameterValue")])
        def jobBStatus = jobB.getResult()
        if (jobBStatus == "FAILURE") {   // build results are "FAILURE"/"SUCCESS"/etc.
            throw new RuntimeException("Downstream jobB failed with reason ...")
        }
        ...
    } catch (Exception e) {
        throw e
    }
}
Declarative Pipeline for jobA
pipeline {
    agent any
    stages {
        stage('call jobB') {
            steps {
                script {
                    // propagate: false so jobB's result can be checked explicitly
                    def jobB = build(job: 'jobB', propagate: false, parameters: [
                        string(name: "parameterName", value: "parameterValue")
                    ])
                    def jobBStatus = jobB.getResult()
                    if (jobBStatus == "FAILURE") {
                        error("Downstream jobB failed with reason ...")
                    }
                }
            }
        }
    }
}
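Note that by default the build step propagates the downstream result and would fail jobA automatically; passing propagate: false, as in the examples above, is only needed so you can inspect the result yourself and raise your own error message.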
Try using the Parameterized Remote Trigger plugin. It should give you what you want. I'm having some problems configuring it with authentication tokens and users from a Jenkinsfile, but if you are using the GUI I'm sure you will get the job done.

No such DSL method 'steps' found among steps

I'm trying to post to a Slack channel whenever CI fails, using a Groovy script. However, when I try to implement this inside the failure block I'm getting this error:
Error when executing failure post condition:
java.lang.NoSuchMethodError: No such DSL method 'steps' found among steps [archive, bat, build, catchError, checkout, deleteDir, dir, dockerFingerprintFrom, dockerFingerprintRun, echo, envVarsForTool, error, fileExists, getContext, git, input, isUnix, junit, library, libraryResource, load, lock, mail, milestone, node, parallel
However, I was able to use this same code to send Slack notifications in other pipelines inside stages blocks. It looks as if it has issues when applied to the post block.
post {
    always {
        cleanWs()
    }
    failure {
        steps {
            slackSend baseUrl: 'https://hooks.slack.com/services/',
                channel: '#build-failures',
                iconEmoji: '',
                message: "CI failing for - #${env.BRANCH_NAME} - ${currentBuild.currentResult} (<${env.BUILD_URL}|Open>)",
                teamDomain: 'differentau',
                tokenCredentialId: 'slack-token-build-failures',
                username: ''
        }
    }
}
This should work; in a declarative post section each condition block (always, failure, and so on) contains steps directly, so the steps { } wrapper is only valid inside a stage:
post {
    always {
        cleanWs()
    }
    failure {
        slackSend baseUrl: 'https://hooks.slack.com/services/',
            channel: '#build-failures',
            iconEmoji: '',
            message: "CI failing for - #${env.BRANCH_NAME} - ${currentBuild.currentResult} (<${env.BUILD_URL}|Open>)",
            teamDomain: 'differentau',
            tokenCredentialId: 'slack-token-build-failures',
            username: ''
    }
}

How to ask user input only when branch matches master in Jenkinsfile pipeline?

I tried to use the following Jenkinsfile to ask for input only on the master branch, but I can't get the Groovy syntax to pass validation:
pipeline {
    stage('Deploy') {
        when {
            branch 'master'
        }
        steps {
            input(message: "Please input", parameters: [string(name: "VERSION", defaultValue="", description="")]
        }
    }
}
The error is:
java.lang.IllegalArgumentException: Expected named arguments but got [{name=VERSION, description=""}, null]
I searched a lot but didn't find a single example of using the input step in a Jenkinsfile with parameters.
Can anyone shed some light on this? Thanks in advance!
I finally found a way to do this: wrap the input step in a script block.
stage('Deploy to k8s prod') {
    when {
        branch 'release-prod'
    }
    steps {
        script {
            env.VERSION = input message: 'Please enter the parameters', ok: 'OK', parameters: [
                string(name: 'VERSION', description: "Version number")]
        }
        sh "echo ${env.VERSION}"
    }
}
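Note that when the input step is given a single parameter, it returns that parameter's value directly, which is why it can be assigned straight to env.VERSION; with multiple parameters it returns a map keyed by parameter name.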

Grouping post conditions in a Jenkins declarative pipeline

Is there a way to group post conditions in a Jenkins declarative pipeline ?
For instance, I want to do the same thing for the aborted, failure, and success statuses.
Is there a shorter way to do it than the following ?
post {
    aborted { sendNotification(currentBuild.result, "$LIST_NOTIFICATION_JENKINS") }
    failure { sendNotification(currentBuild.result, "$LIST_NOTIFICATION_JENKINS") }
    success { sendNotification(currentBuild.result, "$LIST_NOTIFICATION_JENKINS") }
}
There is the 'always' condition:
post {
    always { sendNotification(currentBuild.result, "$LIST_NOTIFICATION_JENKINS") }
}
The 'always' condition will run regardless of the completion status of the run.
See the documentation on the post section.
If you want a set of common actions for just a few conditions, for example the same thing for failure and aborted, I would recommend creating a function in your script and calling it from the failure and aborted post conditions.
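For example, a minimal sketch of that approach (the notifyTerminal helper name is made up; sendNotification and LIST_NOTIFICATION_JENKINS come from the question):
// Shared helper defined above the pipeline block in the Jenkinsfile
def notifyTerminal() {
    sendNotification(currentBuild.result, "$LIST_NOTIFICATION_JENKINS")
}

pipeline {
    agent any
    stages {
        ...
    }
    post {
        aborted { notifyTerminal() }
        failure { notifyTerminal() }
    }
}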
You can also do something like the following:
always {
    script {
        if (currentBuild.currentResult == "ABORTED" || currentBuild.currentResult == "FAILURE") {
            echo "was aborted or failed"
        }
    }
}
