When I am fetching data from a PowerShell script into Jenkins I am getting the error below - jenkins-pipeline

pipeline {
    agent none
    stages {
        stage('Powershell Data') {
            agent { label 'data' }
            steps {
                script {
                    OUTPUT = bat(returnStdout: true, script: '#powershell -File C:\\Users\\kreddy\\Powershell_Scripts\\PowerShell.ps1').replaceAll(~/\n/, '').replace('-', '').trim()
                    echo OUTPUT
                }
            }
        }
    }
}
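For reference, here is a minimal sketch (an assumption about the intent, not the asker's actual fix): on a Windows agent the built-in powershell step can capture the script's standard output directly, which avoids wrapping the call in bat.
// Hedged sketch: uses the `powershell` step (PowerShell must be available on the 'data' agent);
// returnStdout captures whatever the script writes to standard output.
pipeline {
    agent none
    stages {
        stage('Powershell Data') {
            agent { label 'data' }
            steps {
                script {
                    def output = powershell(
                        returnStdout: true,
                        script: '& "C:\\Users\\kreddy\\Powershell_Scripts\\PowerShell.ps1"'
                    ).trim()
                    echo "Script returned: ${output}"
                }
            }
        }
    }
}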

Related

Jenkins declarative when condition to check if a variable is NULL

I want to skip the Build stage if the AMI already exists, using declarative syntax:
stage('Build') {
    environment {
        AMI = sh(returnStdout: true, script: 'aws ec2 describe-images').trim()
    }
    when {
        expression { AMI = null }
    }
    steps {
        sh 'packer build base.json -machine-readable'
    }
}
But when I run this pipeline I get groovy.lang.MissingPropertyException: No such property: AMI for class: groovy.lang.Binding.
At the same time, the scripted pipeline works perfectly fine:
stage('Build') {
    steps {
        script {
            env.AMI = sh(returnStdout: true, script: 'aws ec2 describe-images').trim()
            if (env.AMI == '') {
                sh 'packer build base.json -machine-readable'
            }
        }
    }
}
I'd really love to switch to declarative pipelines; I'm just stuck on this condition. Any help is really appreciated. Thanks.
I tried a lot of things without any luck:
when {
    expression {
        return AMI.isEmpty()
    }
}
when {
    not {
        expression {
            AMI == ''
        }
    }
}
when {
    not {
        expression { env.AMI }
    }
}
Nothing works. I suspect it is somehow related to how the environment variable gets assigned from the sh step.
You can do something like this.
pipeline {
    agent any
    stages {
        stage('Build') {
            when {
                expression {
                    return isAMIAvailable()
                }
            }
            steps {
                sh 'packer build base.json -machine-readable'
            }
        }
    }
}

def isAMIAvailable() {
    AMI = sh(returnStdout: true, script: 'aws ec2 describe-images').trim()
    return AMI == null
}
With @ycr's help I was able to build it!
Just in case, here is the whole thing:
pipeline {
    agent any
    environment {
        ENV = 'dev'
    }
    stages {
        stage('Build') {
            when { not { expression { return isAMIAvailable() } } }
            steps {
                sh 'packer build base.json -machine-readable'
            }
        }
    }
}

def isAMIAvailable() {
    AMI = sh(returnStdout: true, script: "aws ec2 describe-images --owners self --filters 'Name=name,Values=base-${ENV}-1' --query 'Images[*].[Name]' --output text").trim()
    if (AMI == '') {
        return AMI == null   // no image found: evaluates to false, so the Build stage runs
    }
    return AMI               // image name is a non-empty (truthy) string, so the Build stage is skipped
}
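As a side note (a simplification sketch, not part of the accepted answer): the helper can return a plain boolean, which makes the when { not { expression { ... } } } condition easier to read.
// Hypothetical simplification: true only when an AMI with the expected name already exists,
// so the Build stage runs only when no such image is found.
def isAMIAvailable() {
    def ami = sh(
        returnStdout: true,
        script: "aws ec2 describe-images --owners self --filters 'Name=name,Values=base-${ENV}-1' --query 'Images[*].[Name]' --output text"
    ).trim()
    return !ami.isEmpty()
}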

Jenkins pipeline - Upon error go to postFailure block

Folks,
I have the Jenkins pipeline below. I want to run all the stages, hence I have catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') in place.
I have a script minion_notification.sh which produces a file called "jk_buildpass.txt". Now, I want my build to go to the postFailure block whenever the condition if [ ! -s jk_buildpass.txt ] is matched. With exit 1 the job is not failing and the subsequent stages still get executed. Any idea?
pipeline = {
    ansiColor('xterm') {
        FILEVERSION = params.FILEVERSION.trim()
        def remote = [:]
        remote.name = 'xxx'
        remote.host = '10.xxx.xx.x'
        remote.allowAnyHosts = true
        withCredentials([usernamePassword(credentialsId: 'saltmaster', passwordVariable: 'password', usernameVariable: 'ops')]) {
            remote.user = 'ops'
            remote.password = password
            stage(' Backup') {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh './minion_notification.sh "${FILEVERSION}" "Validate" "backupdb"'
                    sh(returnStdout: true, script: '''#!/bin/bash
                    if [ ! -s jk_buildpass.txt ]; then
                        exit 1
                    else
                        echo "jk_buildpass is not empty"
                    fi
                    '''.stripIndent())
                }
            }
            stage(' Uninstall') {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh './minion_notification.sh "${FILEVERSION}" "Validate" "IRMUninstall"'
                    sh(returnStdout: true, script: '''#!/bin/bash
                    if [ ! -s jk_buildpass.txt ]; then
                        exit 1
                    else
                        echo "jk_buildpass is not empty"
                    fi
                    '''.stripIndent())
                }
            }
            stage('Run DDMU') {
                sh './minion_notification.sh "${FILEVERSION}" "jenkins-display"'
            }
        }
        // sh './slk_notify.sh "Upgrade successfully completed for - ${FILEVERSION} " "*********** " "good" ":jenkins:"'
    }
}
postFailure = {
    sh './slk_notify.sh " Upgrade failed for - ${FILEVERSION} " "danger" ":jenkinserror:"'
}
postAlways = {
    echo 'Cleaning Workspace now'
    env.WORKSPACE = pwd()
    //sh "rm ${env.WORKSPACE}/* -fr"
}
node {
    properties([
        parameters([
            string(name: 'FILEVERSION', defaultValue: '', description: 'Enter the file name to be processed')
        ])
    ])
    try {
        pipeline()
    } catch (e) {
        postFailure()
        throw e
    } finally {
        postAlways()
    }
}
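For what it's worth, here is a hedged sketch of one way to reach the postFailure block (an interpretation, not a posted answer): catchError(buildResult: 'SUCCESS', ...) deliberately keeps the overall build green and swallows the exception, so the try/catch around pipeline() never fires. Raising the error outside catchError, for example with the error step after checking the file, does propagate to postFailure.
// Hedged sketch: run the notification inside catchError as before, then check the file
// with fileExists/readFile and call error() outside catchError so the exception reaches
// the try/catch in node{} and postFailure runs.
stage(' Backup') {
    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
        sh './minion_notification.sh "${FILEVERSION}" "Validate" "backupdb"'
    }
    if (!fileExists('jk_buildpass.txt') || readFile('jk_buildpass.txt').trim().isEmpty()) {
        error 'jk_buildpass.txt is missing or empty'   // fails the build and triggers postFailure
    }
}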

Jenkins copy files error after docker npm build

I have a script similar to this:
pipeline {
    agent {
        docker {
            label 'dev'
            image 'node:12-alpine'
            args '-p 3000:3000'
        }
    }
    environment {
        HOME = '.'
    }
    stages {
        stage('clone repo') {
            steps {
                git(
                    url: '...',
                    credentialsId: '...',
                    branch: 'master'
                )
            }
        }
        stage('install dependency packages') {
            steps {
                sh 'npm install'
            }
        }
        stage('build prod ready environment') {
            steps {
                sh 'npm run build'
            }
        }
        stage('deploy') {
            agent { node { label 'dev' } }
            steps {
                sh "cp -rf ./build/* /opt/www_folder/"
            }
        }
    }
}
Now everything works fine except the deploy stage, which just hangs the build process. If I run only the last stage (deploy) separately, without the other stages, it works fine. I think there is a conflict with the docker agent, but I don't know how to fix it.
I'm not sure if this is the best answer, but I managed to fix my problem with this script:
pipeline {
    agent none
    environment {
        HOME = '.'
    }
    stages {
        stage('clone repo') {
            agent { node { label 'dev' } }
            steps {
                git(
                    url: '...',
                    credentialsId: '...',
                    branch: 'master'
                )
            }
        }
        stage('install and build') {
            agent {
                docker {
                    label 'dev'
                    image 'node:12-alpine'
                    args '-p 3000:3000'
                }
            }
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('deploy') {
            agent { node { label 'dev' } }
            steps {
                sh "rm -rf /opt/www_folder/*"
                sh "cp -rf ./build/* /opt/www_folder/"
            }
        }
    }
}
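An aside, not part of the answer above: if the docker stage and the deploy stage ever end up in different workspaces, the build output can be handed over explicitly with the built-in stash/unstash steps, for example:
// Hedged sketch: stash ./build on the docker agent and unstash it on the deploy node,
// so the copy no longer depends on both stages sharing a workspace.
stage('install and build') {
    agent {
        docker {
            label 'dev'
            image 'node:12-alpine'
        }
    }
    steps {
        sh 'npm install'
        sh 'npm run build'
        stash name: 'build-output', includes: 'build/**'
    }
}
stage('deploy') {
    agent { node { label 'dev' } }
    steps {
        unstash 'build-output'
        sh 'cp -rf ./build/* /opt/www_folder/'
    }
}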

How to pass parameters to a remote script in a Jenkins pipeline

I am working on a declarative Jenkins pipeline where I am trying to log in to a remote host and execute a shell script. Below is a sample snippet. I want to know how to pass a parameter to my script.sh script.
#!/bin/bash
echo "argument $1"
Below is the pipeline script:
hosts = ["x.x.x", "x.x.x"]
pipeline {
    agent { node { label 'docker' } }
    parameters {
        choice(name: 'stageparam', choices: ['build', 'deploy'], description: 'xyz')
        string(name: 'Username', defaultValue: 'abc', description: 'enter username')
    }
    stages {
        stage('Setup') {
            steps {
                script {
                    pom = getPom(effective: false)
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    def targetServers = null
                    if (stageparam == "deploy") {
                        targetServers = hosts
                    }
                    targetServers.each { server ->
                        echo "Server : ${server}"
                        def remote = [:]
                        remote.name = 'server'
                        remote.host = server
                        remote.user = Username
                        def pass = passwordParameter description: "Enter password for user ${remote.user} "
                        remote.password = pass
                        remote.allowAnyHosts = true
                        stage('Remote SSH') {
                            sshPut remote: remote, from: './script.sh', into: '.'
                            sshScript remote: remote, script: "doc.sh ${Username}"
                        }
                    }
                }
            }
        }
    }
}
I am getting the error below while executing the script:
/home/jenkins/workspace/script.sh Username does not exist.
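One workaround worth noting (an assumption about the cause, not from the thread): the SSH Pipeline Steps sshScript step takes only a local script path and does not forward extra arguments, so the parameter can instead be passed by uploading the script with sshPut and running it with sshCommand:
// Hedged sketch assuming the SSH Pipeline Steps plugin (sshPut/sshCommand are its steps).
sshPut remote: remote, from: './script.sh', into: '.'
sshCommand remote: remote, command: "chmod +x ./script.sh && ./script.sh '${Username}'"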
I have written a simple pipeline script to show the usage of a variable in a bash script.
Pipeline script:
pipeline {
    agent any
    parameters {
        choice(name: 'stageparam', choices: ['build', 'deploy'], description: 'enter stageparam')
        string(name: 'USERNAME', defaultValue: 'abc', description: 'enter username')
    }
    stages {
        stage('Git Pull') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'gitlab-test', url: 'http://localhost:8076/test-upgrade.git']]])
            }
        }
        stage('Run the python script') {
            steps {
                sh 'chmod 777 test.py'
                sh "python test.py ${env.WORKSPACE}"
            }
        }
        stage('Run the bash script') {
            steps {
                sh 'chmod 777 run.sh'
                sh "./run.sh ${env.USERNAME}"
            }
        }
    }
}

Template docker agent in jenkins Declarative pipeline

I have a Jenkinsfile for a declarative pipeline that uses docker agents. A number of the stages use a docker agent, and it is a bit repetitive adding the same agent to each of these stages, e.g.:
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                //Do something
            }
        }
        stage('Stage 2') {
            agent {
                docker {
                    image 'jenkins/jenkins-slave:latest'
                    reuseNode true
                    registryUrl 'https://some.registry/'
                    registryCredentialsId 'git'
                    args '-v /etc/passwd:/etc/passwd -v /etc/group:/etc/group -e HOME=${WORKSPACE}'
                }
            }
            steps {
                //Do something
            }
        }
        stage('Stage 3') {
            steps {
                //Do something
            }
        }
        stage('Stage 4') {
            agent {
                docker {
                    image 'jenkins/jenkins-slave:latest'
                    reuseNode true
                    registryUrl 'https://some.registry/'
                    registryCredentialsId 'git'
                    args '-v /etc/passwd:/etc/passwd -v /etc/group:/etc/group -e HOME=${WORKSPACE}'
                }
            }
            steps {
                //Do something
            }
        }
    }
}
Is there any way to template the agent (or write my own) so that I could do something like the following:
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                //Do something
            }
        }
        stage('Stage 2') {
            agent {
                my-docker
            }
            steps {
                //Do something
            }
        }
        stage('Stage 3') {
            steps {
                //Do something
            }
        }
        stage('Stage 4') {
            agent {
                my-docker
            }
            steps {
                //Do something
            }
        }
    }
}
That way I would not have to repeatedly write the same agent definition, and I could possibly reuse it across all my Jenkinsfiles.
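There is no built-in my-docker shorthand in Declarative Pipeline, but one commonly used workaround (a hedged sketch, not an official feature) is to hoist the repeated values into Groovy variables defined above the pipeline block and reference them in each agent directive:
// Hypothetical helper values; whether directive parameters accept Groovy variables like this
// can depend on the Declarative Pipeline plugin version, so treat this as a sketch.
def slaveImage = 'jenkins/jenkins-slave:latest'
def slaveArgs  = '-v /etc/passwd:/etc/passwd -v /etc/group:/etc/group -e HOME=${WORKSPACE}'

pipeline {
    agent any
    stages {
        stage('Stage 2') {
            agent {
                docker {
                    image slaveImage
                    reuseNode true
                    registryUrl 'https://some.registry/'
                    registryCredentialsId 'git'
                    args slaveArgs
                }
            }
            steps {
                //Do something
            }
        }
    }
}
A shared library can centralise those values across Jenkinsfiles, but the agent block itself still has to appear in each stage that needs it.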
