I had to create a Jenkins job to automate certain tasks: updating the public site, changing the public version to the latest public release, updating software on the public site, and restarting the server. These involve operations such as copying files to a tmp folder, logging in to an on-prem server, going to the folder, unzipping the file, and so on.
I have created the Jenkinsfile as follows:
pipeline {
options {
skipDefaultCheckout()
timestamps()
}
parameters {
string(name: 'filename', defaultValue: 'abc', description: 'Enter the file name that needs to be copied')
string(name: 'database', defaultValue: 'abc', description: 'Enter the database that needs to be created')
choice(name: 'Run', choices: '', description: 'Data migration')
}
agent {
node { label 'aws && build && linux && ubuntu' }
}
triggers {
pollSCM('H/5 * * * *')
}
stages {
stage('Clean & Clone') {
steps {
cleanWs()
checkout scm
}
}
stage('Updating the public site'){
steps{
sh "scp ./${filename}.zip <user>#<server name>:/tmp"
sh "ssh <user>#<server name>"
sh "cp ./tmp/${filename}.zip ./projects/xyz/xyz-site/"
sh "cd ./projects/xyz/xyz-site/ "
sh "unzip ./${filename}.zip"
sh "cp -R ./${filename}/* ./"
}
}
stage('Changing public version to latest public release') {
steps {
sh "scp ./${filename}.sql.gz <user>#<server name>:/tmp"
sh "ssh <user>#<server name>"
sh "mysql -u root -p<PASSWORD>"
sh "show databases;"
sh "create database ${params.database};"
sh "GRANT ALL PRIVILEGES ON <newdb>.* TO 'ixxyz'#'localhost' WITH GRANT OPTION;"
sh "exit;"
sh "zcat tmp/${filename}.sql.gz | mysql -u root -p<PASSWORD> <newdb>"
sh "db.default.url="jdbc:mysql://localhost:3306/<newdb>""
sh "ps aux|grep monitor.sh|awk '{print "kill "$2}' |bash"
}
}
stage('Updating Software on public site') {
steps {
sh "scp <user>#<server>:/tmp/abc<version>_empty_h2.zip"
sh "ssh <user>#<server name>"
sh "su <user>"
sh "mv tmp/<version>_empty_h2.zip ./xyz/projects/xyz"
sh "cd xyz/projects/xyz"
sh "cp latest/conf/local.conf <version>_empty_h2/conf/"
}
}
stage('Restarting Server') {
steps {
sh "rm latest/RUNNING_PID"
sh "bash reload.sh"
sh "nohup bash monitor.sh &"
}
}
}
}
Is there a way I can dynamically obtain the zip filename in the root folder? I used ${filename}.zip, but it doesn't seem to work.
Also, is there a better way to perform these operations using Jenkins? Any help is much appreciated.
You could write all your steps as one shell script per stage and execute it in a single sh step within that stage.
Regarding filename.zip: either take it as a parameter and pass that value to your stages, or use the find command (as a shell command or inside a shell script) to locate .zip files in the current directory, e.g. find <dir> -iname \*.zip or find . -iname \*.zip (see the sketch after the example below).
Example:
pipeline {
options {
skipDefaultCheckout()
timestamps()
}
parameters {
string(name: 'filename', defaultValue: 'abc', description: 'Enter the file name that needs to be copied')
choice(name: 'Run', choices: '', description: 'Data migration')
}
stages {
stage('Updating the public site') {
steps {
sh "scp ./${params.filename}.zip <user>@<server name>:/tmp"
...
}
}
}
}
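If you want to discover the archive name at runtime instead of passing it in, a minimal sketch (assuming exactly one .zip sits in the workspace root and the agent has GNU find; the ZIP_FILE variable name is just an example) could capture the output of find in a script block:
stage('Locate archive') {
    steps {
        script {
            // Assumes exactly one .zip file in the workspace root
            env.ZIP_FILE = sh(
                script: "find . -maxdepth 1 -iname '*.zip' -printf '%f\\n' | head -n 1",
                returnStdout: true
            ).trim()
            echo "Found archive: ${env.ZIP_FILE}"
        }
    }
}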
To execute a script at a certain location, as in your question, you can use dir with the path where your scripts are placed.
Or you can give the path directly: sh label: 'execute script', script: "C:\\Data\\build.sh"
stage('Your stage name'){
steps{
script {
// Give path where your scripts are placed
dir ("C:\\Data") {
sh label: 'execute script', script:"build.sh <Your Arguments> "
...
}
}
}
}
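Following the suggestion to run one shell script per stage, a hedged sketch of your 'Updating the public site' stage could run all of the remote commands through a single ssh invocation instead of separate sh steps (each sh step starts its own shell, so an ssh login in one step does not carry over to the next); user, server and paths are the placeholders from your question:
stage('Updating the public site') {
    steps {
        // One sh step per stage; the remote work happens inside a single ssh call
        sh """
            scp ./${params.filename}.zip <user>@<server name>:/tmp
            ssh <user>@<server name> '
                cp /tmp/${params.filename}.zip ~/projects/xyz/xyz-site/ &&
                cd ~/projects/xyz/xyz-site/ &&
                unzip -o ${params.filename}.zip &&
                cp -R ${params.filename}/* ./
            '
        """
    }
}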
Related
I have created a Jenkins pipeline for an application, with the following stages in my declarative pipeline:
Checkout
nuget restore
sonar scan start
dotnet build
sonar scan end
build docker image
run container
deploy on google Kubernetes cluster
If I don't include the 8th step, my pipeline works fine, but if I include it, the pipeline only works the first time; for the next runs I get an error in the first stage.
I have created a Windows machine on Azure and am running Jenkins on that machine.
Jenkinsfile:
stages {
stage('Code Checkout') {
steps {
echo 'Cloning project...'
deleteDir()
checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: '*/development']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/shailu0287/JenkinsTest.git']]]
echo 'Project cloned...'
}
}
stage('Nuget Restore') {
steps {
echo "nuget restore"
bat 'dotnet restore \"WebApplication4.sln\"'
}
}
stage('Sonar Scan Start'){
steps{
withSonarQubeEnv('SonarQube_Home') {
echo "Sonar scan start"
echo "${scannerHome}"
bat "${scannerHome}\\SonarScanner.MSBuild.exe begin /k:\"Pan33r\" /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
}
}
}
stage('Build Solution') {
steps {
echo "Build Solution"
bat "\"${tool 'MSBUILD_Home'}\" WebApplication4.sln /p:Configuration=Release /p:Platform=\"Any CPU\" /p:ProductVersion=1.0.0.${env.BUILD_NUMBER}"
}
}
stage('Sonar Scan End'){
steps{
withSonarQubeEnv('SonarQube_Home') {
echo "${scannerHome}"
echo "sonar scan end"
bat "${scannerHome}\\SonarScanner.MSBuild.exe end /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
}
}
}
stage('Building docker image') {
steps{
script {
echo "Building docker image"
dockerImage = docker.build registry + ":$BUILD_NUMBER"
}
}
}
stage('Containers'){
parallel{
stage("Run PreContainer Checks"){
environment{
containerID = "${bat(script: 'docker ps -a -q -f name="c-Shailendra-master"', returnStdout: true).trim().readLines().drop(1).join("")}"
}
steps{
script{
echo "Run PreContainer Checks"
echo env.containerName
echo "containerID is "
echo env.containerID
if(env.containerID != null){
echo "Stop container and remove from stopped container list too"
bat "docker stop ${env.containerID} && docker rm ${env.containerID}"
}
}
}
}
stage("Publish Docker Image to DockerHub"){
steps{
script {
echo "Pushing docker image to docker hub"
docker.withRegistry( '', registryCredential ) {
dockerImage.push("$BUILD_NUMBER")
dockerImage.push('latest')
}
}
}
}
}
}
stage('Docker Deployment'){
steps{
echo "${registry}:${BUILD_NUMBER}"
echo "Docker Deployment by using docker hub's image"
bat "docker run -d -p 7200:80 --name c-${containerName}-master ${registry}:${BUILD_NUMBER}"
}
}
stage('Deploy to GKE') {
steps{
echo "Deployment started ..."
step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME, location: env.LOCATION, manifestPattern: 'Kubernetes.yml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
}
}
}
}
If I remove the last step, all my builds work fine. If I include the last step, only the first build works fine and then I have to restart the machine. I am not sure what the issue with the YML file is.
How can I pass values to a shell script from Jenkins during the runtime of the pipeline job?
I have a shell script and want to pass the values dynamically.
#!/usr/bin/env bash
....
# some code
....
export USER=""      # <--- want to pass this value from the pipeline
export password=""  # <--- possibly as a secret
The Jenkins pipeline executes the above shell script:
node('abc'){
stage('build'){
sh "cd .."
sh "./script.sh"
}
}
You can do something like the following:
pipeline {
agent any
environment {
USER_PASS_CREDS = credentials('user-pass')
}
stages {
stage('build') {
steps {
sh "cd .."
sh('./script.sh ${USER_PASS_CREDS_USR} ${USER_PASS_CREDS_PSW}')
}
}
}
}
The credentials helper comes from the Credentials API and the Credentials plugin; for a usernamePassword credential it automatically exposes the USER_PASS_CREDS_USR and USER_PASS_CREDS_PSW variables used above. Your other option is the Credentials Binding plugin, which lets you include credentials as part of a build step:
stage('build with creds') {
steps {
withCredentials([usernamePassword(credentialsId: 'user-pass', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
// available as an env variable, but will be masked if you try to print it out any which way
// note: single quotes prevent Groovy interpolation; expansion is by Bourne Shell, which is what you want
sh 'echo $PASSWORD'
// also available as a Groovy variable
echo USERNAME
// or inside double quotes for string interpolation
echo "username is $USERNAME"
sh('./script.sh $USERNAME $PASSWORD')
}
}
}
Hopefully this helps.
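For completeness, the called script would then read the values as positional arguments rather than hard-coding the exports; a minimal sketch of script.sh under that assumption:
#!/usr/bin/env bash
# Values are passed in by the pipeline as positional arguments
export USER="$1"
export password="$2"   # treat this as a secret; avoid echoing it
....
# rest of the script unchanged
....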
I am new to using Jenkins and Docker. Currently I run into an error where my Jenkinsfile doesn't have permission to docker.sock. Is there a way to fix this? I've run out of ideas.
Things I've tried:
- sudo usermod -aG docker $USER // usermod not found
- sudo setfacl --modify user:******:rw /var/run/docker.sock // setfacl not found
- chmod 777 /var/run/docker.sock // still receiving this error after reboot
- chown -R jenkins:jenkins /var/run/docker.sock // changing ownership of '/var/run/docker.sock': Operation not permitted
Error screenshot omitted; my Jenkinsfile:
def gv
pipeline {
agent any
environment {
CI = 'true'
VERSION = "$BUILD_NUMBER"
PROJECT = "foodcore"
IMAGE = "$PROJECT:$VERSION"
}
tools {
nodejs "node"
'org.jenkinsci.plugins.docker.commons.tools.DockerTool' 'docker'
}
parameters {
choice(name: 'VERSION', choices: ['1.1.0', '1.2.0', '1.3.0'], description: '')
booleanParam(name: 'executeTests', defaultValue: true, description: '')
}
stages {
stage("init") {
steps {
script {
gv = load "script.groovy"
CODE_CHANGES = gv.getGitChanges()
}
}
}
stage("build frontend") {
steps {
dir("client") {
sh 'npm install'
echo 'building client'
}
}
}
stage("build backend") {
steps {
dir("server") {
sh 'npm install'
echo 'building server...'
}
}
}
stage("build docker image") {
steps {
sh 'docker build -t $IMAGE .'
}
}
// stage("deploy") {
// steps {
// script {
// docker.withRegistry(ECURL, ECRCRED) {
// docker.image(IMAGE).push()
// }
// }
// }
// }
}
// post {
// always {
// sh "docker rmi $IMAGE | true"
// }
// }
}
docker.sock permissions will be lost if you restart the system or the Docker service.
To make the change persistent, set up a cron job that re-applies the permissions after each reboot:
@reboot chmod 777 /var/run/docker.sock
And when you restart Docker, make sure to run the command below:
chmod 777 /var/run/docker.sock
Or you can also set up a cron entry for it that executes every 5 minutes.
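As a sketch, the root crontab would then contain both entries (edit it with crontab -e as root; 777 follows the suggestion above, though a narrower mode may be sufficient):
# Re-apply the permissions after every reboot
@reboot chmod 777 /var/run/docker.sock
# ...and again every 5 minutes, in case the Docker service is restarted
*/5 * * * * chmod 777 /var/run/docker.sock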
My pipeline generates a dynamic recipient list on each job execution. I'm trying to use that list, which I set as a variable, in the 'To' section of the emailext plugin; the problem is that the content of the variable is not resolved in the emailext step.
pipeline {
agent {
label 'master'
}
options {
timeout(time: 20, unit: 'HOURS')
}
stages {
stage('Find old Projects') {
steps {
sh '''
find $JENKINS_HOME/jobs/* -type f -name "nextBuildNumber" -mtime +1550|egrep -v "configurations|workspace|modules|promotions|BITBUCKET"|awk -F/ '{print $6}'|sort -u >results.txt
'''
}
}
stage('Generate recipient List') {
steps {
sh '''
for Project in `cat results.txt`
do
grep "mail.com" $JENKINS_HOME/jobs/$Project/config.xml|grep -iv "Ansprechpartner" | awk -F'>' '{print $2}'|awk -F'<' '{print $1}'>> recipientList.txt
done
recipientList=`sort -u recipientList.txt`
echo $recipientList
'''
}
}
stage('Generate list to Shelve or Delete') {
steps {
sh '''
for Project in `cat results.txt`
do
if [ -f "$JENKINS_HOME/jobs/$Project/nextBuildNumber" ]; then
nextBuildNumber=`cat $JENKINS_HOME/jobs/$Project/nextBuildNumber`
if [ $nextBuildNumber == '1' ]; then
echo "$JENKINS_HOME/jobs/$Project" >> jobs2Delete.txt
echo "$Project" >> jobList2Delete.txt
else
echo "$JENKINS_URL/job/$Project/shelve/shelveProject" >> Projects2Shelve.txt
echo "$Project" >> ProjectsList2Shelve.txt
fi
fi
done
'''
}
}
stage('Send email') {
steps {
emailext to: 'admin@mail.com',
from: 'jenkins@mail.com',
attachmentsPattern: 'ProjectsList2Shelve.txt,jobList2Delete.txt',
subject: "This is a subject",
body: "Hello\n\nAttached two lists of Jobs, to archive or delete,\nPlease Aprove or Abort the Shelving / Delition of the Projects:\n${env.JOB_URL}\n\nBlue Ocean:\n${env.RUN_DISPLAY_URL}\n\nyour Team"
}
}
stage('Approve or Abort') {
steps {
input message: 'OK to Shelve and Delete projects? \n Review the jobs list (Projects2Shelve.txt, jobs2Delete.txt) sent to your email', submitter: 'someone'
}
}
stage('Shelve or Delete') {
parallel {
stage('Shelve Project') {
steps {
withCredentials([usernamePassword(credentialsId: 'XYZ', passwordVariable: 'PA', usernameVariable: 'US')]) {
sh '''
for job2Shelve in `cat Projects2Shelve.txt`
do
curl -u $US:$PA $job2Shelve
done
'''
}
}
}
stage('Delete Project') {
steps {
sh '''
for job2Del in `cat jobs2Delete.txt`
do
echo "Removing $job2Del"
done
'''
}
}
}
}
}
post {
success {
emailext to: "$recipientListTest",
from: 'jenkins@mail.com',
attachmentsPattern: 'Projects2Shelve.txt,jobs2Delete.txt',
subject: "This is a sbject",
body: "Hallo\n\nAttached two lists of Jobs which archived or deleted due to inactivity of more the 400 days\n\n\nyour Team"
}
}
}
I figured out that the only way would be to add a script block in the post section, together with a variable definition outside of the pipeline block:
post {
success {
script {
RECIPIENTLIST = sh(returnStdout: true, script: 'cat recipientListTest.txt')
}
emailext to: "${RECIPIENTLIST}",
from: 'jenkins#mail.com',
attachmentsPattern: 'Projects2Shelve.txt,jobs2Delete.txt',
subject: "MY SUBJECT",
body: "MY BODY"
}
When you execute a sh command, you cannot reuse the variables that you set within that command. You need to do something like the following.
On top of your pipeline file, to make this variable global:
def recipientsList
Then execute your shell command and retrieve the output:
recipientsList = sh (
script: '''for Project in `cat results.txt`
do
grep "mail.com" $JENKINS_HOME/jobs/$Project/config.xml|grep -iv "Ansprechpartner" | awk -F'>' '{print $2}'|awk -F'<' '{print $1}'>> recipientList.txt
done
recipientList2=`sort -u recipientList.txt`
echo $recipientList2
''',
returnStdout: true
).trim()
Now in your email you can use the Groovy variable ${recipientsList}.
I renamed your bash variable to recipientList2 to avoid confusion.
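With that in place, the post section can reference the Groovy variable directly; a sketch based on your original post block:
post {
    success {
        emailext to: "${recipientsList}",
            from: 'jenkins@mail.com',
            attachmentsPattern: 'Projects2Shelve.txt,jobs2Delete.txt',
            subject: "This is a subject",
            body: "..."
    }
}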
EDIT: I don't know what you want to obtain, but consider using some default recipients provided by emailext:
recipientProviders: [ developers(), culprits(), requestor(), brokenBuildSuspects(), brokenTestsSuspects() ],
I'm using the following code to run our voter. Currently I have one target called Run Tests, which uses exactly the same steps as the other one (Lint); at the moment I duplicate them, which I don't think is a good solution.
Is there a nice way to avoid this duplication and do it only once, as a prerequisite process?
I need all the steps up to the cd into the project.
The only difference is that one target runs
go test ...
and the second
go lint
All the steps before that are identical.
#!/usr/bin/env groovy
try {
parallel(
'Run Tests': {
node {
// -------- Here we start
checkout scm
def dockerImage = 'docker.company:50001/crt/deg:0.1.3-09'
setupPipelineEnvironment script: this,
measureDuration(script: this, measurementName: 'build') {
executeDocker(dockerImage: dockerImage, dockerWorkspace: '/go/src') {
sh """
mkdir -p /go/src/github.com/ftr/myGoProj
cp -R $WORKSPACE/* /go/src/github.com/ftr/MyGoProj
cd /go/src/github.com/ftr/MyGoProj
# -------- Here we finish and TEST
go test -v ./...
"""
}
}
}
},
'Lint': {
node {
// -------- Here we start
checkout scm
def dockerImage = 'docker.company:50001/crt/deg:0.1.3-09'
setupPipelineEnvironment script: this,
measureDuration(script: this, measurementName: 'build') {
executeDocker(dockerImage: dockerImage, dockerWorkspace: '/go/src') {
sh """
mkdir -p /go/src/github.com/ftr/myGoProj
cp -R $WORKSPACE/* /go/src/github.com/ftr/MyGoProj
cd /go/src/github.com/ftr/MyGoProj
# -------- Here we finish and LINT
go lint
"""
}
}
}
}
)
}
You can use a function and pass the Go arguments to it:
try {
parallel(
'Run Tests': {
node {
checkout scm
runTestsInDocker('test -v ./...')
}
},
'Lint': {
node {
checkout scm
runTestsInDocker('lint')
}
}
)
}
def runTestsInDocker(goArgs) {
def dockerImage = 'docker.company:50001/crt/deg:0.1.3-09'
setupPipelineEnvironment script: this,
measureDuration(script: this, measurementName: 'build') {
executeDocker(dockerImage: dockerImage, dockerWorkspace: '/go/src') {
sh """
mkdir -p /go/src/github.com/ftr/myGoProj
cp -R $WORKSPACE/* /go/src/github.com/ftr/MyGoProj
cd /go/src/github.com/ftr/MyGoProj
go ${goArgs}
"""
}
}
}
Update
If some actions can be separated out of runTestsInDocker, they probably should be.
For example the setupPipelineEnvironment step: I don't know the exact logic, but maybe it can be run once before running the tests.
node {
stage('setup') {
setupPipelineEnvironment script: this
}
stage ('Tests') {
parallel(
'Run Tests': {
node {
checkout scm
runTestsInDocker('test -v ./...')
}
},
'Lint': {
node {
checkout scm
runTestsInDocker('lint')
}
}
)
}
}
def runTestsInDocker(goArgs) {
def dockerImage = 'docker.company:50001/crt/deg:0.1.3-09'
measureDuration(script: this, measurementName: 'build') {
executeDocker(dockerImage: dockerImage, dockerWorkspace: '/go/src') {
sh """
mkdir -p /go/src/github.com/ftr/myGoProj
cp -R $WORKSPACE/* /go/src/github.com/ftr/MyGoProj
cd /go/src/github.com/ftr/MyGoProj
go ${goArgs}
"""
}
}
}
Note
If you are running the parallel branches on remote agents, remember that setup performed on the master may not be available on a remote slave.
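In that case one option, not shown in the original code and therefore only a hedged sketch, is to stash whatever the setup produces on the node that ran it and unstash it in each parallel branch:
node {
    stage('setup') {
        checkout scm
        setupPipelineEnvironment script: this
        // Make the checked-out sources (and anything setup generated) available to other nodes
        stash name: 'sources', includes: '**'
    }
}
...
node {
    // On the remote agent, restore what the setup node stashed
    unstash 'sources'
    runTestsInDocker('lint')
}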