I want my RPM signing job to be resilient so that I can still run my pipelines when one site/node is down - jenkins-pipeline

I have an RPM signing job that's configured to run in two continents, Europe and North America. The Europe site is the master and N.A. is the failover in case EU is down. The problem is that when EU becomes unavailable, it doesn't fail over to the US. How can I make it resilient when one signing node isn't available, so that I can still run my pipelines when one site is down?
Below is a snippet of my code:
stage('Process Signing') {
    when {
        environment name: 'IS_ARTIFACT_SIGNED', value: 'N'
    } // When
    tools {
        jdk jenkinsToolkits.LoadJdkTool("${buildConfig.JAVA_VERSION}")
        maven jenkinsToolkits.LoadMavenTool("${buildConfig.MAVEN_VERSION}")
    } // Tools
    steps {
        script {
            String BUILD_HOST
            String BUILD_NODE
            String SIGNING_SERVER
            String ARTIFACT_BUILD_HOST
            String[] RPM_LIST = readFile("${JOB_BASE_NAME}_${BUILD_NUMBER}.list").split('\n')
            for (String RPM_ITEM in RPM_LIST) {
                ARTIFACT_BUILD_HOST = rpmSign.ReadRpmBuildHost(rpmSign.ReadRpmInfo(pwd(), "${RPM_ITEM}"))
                try {
                    echo "Trying EU jenkins agent for signing"
                    BUILD_HOST = "eu01" // Valid: eu01, eu02, eu04
                    echo "${BUILD_HOST}"
                    BUILD_NODE = "${BUILD_HOST}_build"
                    stash includes: "${RPM_ITEM}", name: "${RPM_ITEM}-${BUILD_NUMBER}"
                    SIGNING_SERVER = rpmSign.GetSigningServer("${BUILD_HOST}")
                    node("${BUILD_NODE}") {
                        //sh "echo $PATH"
                        unstash "${RPM_ITEM}-${BUILD_NUMBER}"
                        rpmSign.ApplyRpmSigning("${SIGNING_SERVER}", "${BUILD_HOST}", "${RPM_ITEM}")
                        stash includes: "${RPM_ITEM}", name: "${RPM_ITEM}-${BUILD_NUMBER}-SIGNED"
                        cleanWs()
                    } // Node
                } // Try
                catch (err) {
                    echo "EU jenkins agent failed....trying US jenkins agent"
                    BUILD_HOST = "us01" // Valid: us01, us02, us03
                    echo "${BUILD_HOST}"
                    BUILD_NODE = "${BUILD_HOST}_build"
                    stash includes: "${RPM_ITEM}", name: "${RPM_ITEM}-${BUILD_NUMBER}"
                    SIGNING_SERVER = rpmSign.GetSigningServer("${BUILD_HOST}")
                    node("${BUILD_NODE}") {
                        //sh "echo $PATH"
                        unstash "${RPM_ITEM}-${BUILD_NUMBER}"
                        rpmSign.ApplyRpmSigning("${SIGNING_SERVER}", "${BUILD_HOST}", "${RPM_ITEM}")
                        stash includes: "${RPM_ITEM}", name: "${RPM_ITEM}-${BUILD_NUMBER}-SIGNED"
                        cleanWs()
                    } // Node
                } // Catch
            } // For
        } // Script
    } // Steps
} // Stage
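A likely reason the catch never fires: when no agent matching the label is online, node() does not throw; it just sits in the queue waiting for one to appear. A minimal sketch of a workaround, assuming the eu01/us01 hosts, the "<host>_build" label convention, and the rpmSign helpers from the snippet above: wrap the node allocation in a timeout so an unreachable site becomes a catchable error and the loop can fall through to the next site.

script {
    for (String RPM_ITEM in RPM_LIST) {
        boolean signed = false
        // Try each signing site in order of preference.
        for (String BUILD_HOST in ['eu01', 'us01']) {
            if (signed) { break }
            try {
                stash includes: "${RPM_ITEM}", name: "${RPM_ITEM}-${BUILD_NUMBER}"
                String SIGNING_SERVER = rpmSign.GetSigningServer(BUILD_HOST)
                // node() queues forever when no "<host>_build" agent is online;
                // the timeout turns that endless wait into a catchable exception.
                timeout(time: 10, unit: 'MINUTES') {
                    node("${BUILD_HOST}_build") {
                        unstash "${RPM_ITEM}-${BUILD_NUMBER}"
                        rpmSign.ApplyRpmSigning(SIGNING_SERVER, BUILD_HOST, RPM_ITEM)
                        stash includes: "${RPM_ITEM}", name: "${RPM_ITEM}-${BUILD_NUMBER}-SIGNED"
                        cleanWs()
                    }
                }
                signed = true
            } catch (err) {
                echo "Signing on ${BUILD_HOST} failed or timed out: ${err}"
            }
        }
        if (!signed) {
            error "No signing site could sign ${RPM_ITEM}"
        }
    }
}

Pick a timeout comfortably longer than a normal queue-plus-signing run, since it bounds the signing work as well as the wait for an agent.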

Related

Jenkins git checkout stage not able to checkout yml file

I have created a Jenkins pipeline for an application. I have the following stages in my declarative pipeline:
1. Checkout
2. NuGet restore
3. Sonar scan start
4. dotnet build
5. Sonar scan end
6. Build Docker image
7. Run container
8. Deploy on Google Kubernetes cluster
If I don't include the 8th step, my pipeline works fine, but if I include it, the pipeline works only the first time. On subsequent runs I get the below error in the first stage.
I have created a Windows machine on Azure and am running Jenkins on that machine.
Jenkinsfile:
stages {
    stage('Code Checkout') {
        steps {
            echo 'Cloning project...'
            deleteDir()
            checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: '*/development']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/shailu0287/JenkinsTest.git']]]
            echo 'Project cloned...'
        }
    }
    stage('Nuget Restore') {
        steps {
            echo "nuget restore"
            bat 'dotnet restore "WebApplication4.sln"'
        }
    }
    stage('Sonar Scan Start') {
        steps {
            withSonarQubeEnv('SonarQube_Home') {
                echo "Sonar scan start"
                echo "${scannerHome}"
                bat "${scannerHome}\\SonarScanner.MSBuild.exe begin /k:\"Pan33r\" /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
            }
        }
    }
    stage('Build Solution') {
        steps {
            echo "Build Solution"
            bat "\"${tool 'MSBUILD_Home'}\" WebApplication4.sln /p:Configuration=Release /p:Platform=\"Any CPU\" /p:ProductVersion=1.0.0.${env.BUILD_NUMBER}"
        }
    }
    stage('Sonar Scan End') {
        steps {
            withSonarQubeEnv('SonarQube_Home') {
                echo "${scannerHome}"
                echo "sonar scan end"
                bat "${scannerHome}\\SonarScanner.MSBuild.exe end /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
            }
        }
    }
    stage('Building docker image') {
        steps {
            script {
                echo "Building docker image"
                dockerImage = docker.build registry + ":$BUILD_NUMBER"
            }
        }
    }
    stage('Containers') {
        parallel {
            stage("Run PreContainer Checks") {
                environment {
                    containerID = "${bat(script: 'docker ps -a -q -f name="c-Shailendra-master"', returnStdout: true).trim().readLines().drop(1).join("")}"
                }
                steps {
                    script {
                        echo "Run PreContainer Checks"
                        echo env.containerName
                        echo "containerID is "
                        echo env.containerID
                        if (env.containerID != null) {
                            echo "Stop container and remove from stopped container list too"
                            bat "docker stop ${env.containerID} && docker rm ${env.containerID}"
                        }
                    }
                }
            }
            stage("Publish Docker Image to DockerHub") {
                steps {
                    script {
                        echo "Pushing docker image to docker hub"
                        docker.withRegistry('', registryCredential) {
                            dockerImage.push("$BUILD_NUMBER")
                            dockerImage.push('latest')
                        }
                    }
                }
            }
        }
    }
    stage('Docker Deployment') {
        steps {
            echo "${registry}:${BUILD_NUMBER}"
            echo "Docker Deployment by using docker hub's image"
            bat "docker run -d -p 7200:80 --name c-${containerName}-master ${registry}:${BUILD_NUMBER}"
        }
    }
    stage('Deploy to GKE') {
        steps {
            echo "Deployment started ..."
            step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME, location: env.LOCATION, manifestPattern: 'Kubernetes.yml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
        }
    }
}
If I remove the last step, all my builds work fine. If I include it, only the first build works fine and then I have to restart the machine. I am not sure what the issue with the YML file is.

How to avoid building twice in a Jenkinsfile in order to have different image names?

In our project, we append the branch name when we build, unless it's master, and we tag the images with the build_id and "latest". This is what we want:
- myapp:latest
- myapp:1
- myapp:2
- myapp:3
- myapp-branch-x:latest
- myapp-branch-x:1
- myapp-branch-x:2
- myapp-branch-x:3
- myapp-branch-y:latest
- myapp-branch-y:1
- myapp-branch-y:2
- myapp-branch-y:3
In order to achieve this, we are building twice when it's on the master branch, which seems weird. How can we avoid that?
pipeline {
    environment {
        dockerCredentials = 'myapp-dockerhub'
        dockerRegistryUrl = 'https://dockerhub.example.com'
        imageName = "dockerhub.example.com/myapp/myapp"
        build_id = "${BUILD_ID}"
        branch_name = "${BRANCH_NAME.toLowerCase().replace('-$', '')}"
        app = ''
    }
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    app = docker.build(imageName + '-' + branch_name)
                    withDockerRegistry(credentialsId: dockerCredentials, url: dockerRegistryUrl) {
                        app.push(build_id)
                        app.push('latest')
                    }
                }
            }
        }
        stage('Build-Master') {
            when {
                branch 'master'
            }
            steps {
                script {
                    app = docker.build(imageName)
                    withDockerRegistry(credentialsId: dockerCredentials, url: dockerRegistryUrl) {
                        app.push(build_id)
                        app.push('latest')
                    }
                }
            }
        }
    }
}
Currently the first stage executes in all situations, and the second stage executes only on the master branch, per the when expression when { branch 'master' }. You can add a when expression to the first stage so that it only builds non-master branches; then the master branch will not execute both stages:
stage('Build') {
    when { not { branch 'master' } }
    ...
}
You can check the when expression documentation for more information.
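Alternatively, you can avoid the duplicate stage entirely by computing the image name once. A minimal sketch under the same environment block as above (imageName, branch_name, build_id, dockerCredentials, dockerRegistryUrl):

stage('Build') {
    steps {
        script {
            // Append the branch suffix only on non-master branches,
            // then build and push a single image.
            def name = env.BRANCH_NAME == 'master' ? imageName : "${imageName}-${branch_name}"
            def app = docker.build(name)
            withDockerRegistry(credentialsId: dockerCredentials, url: dockerRegistryUrl) {
                app.push(build_id)
                app.push('latest')
            }
        }
    }
}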

How to configure jenkins pipeline with logstash plugin?

Use case: I want to send the Jenkins job console log to Elasticsearch and from there to Kibana, so that I can visualise the data.
I am using the Logstash plugin to achieve this. For freestyle jobs the Logstash plugin configuration works fine, but for Jenkins pipeline jobs I get all the required data (build number, job name, build duration, and so on) except the build result, i.e., success or failure.
I tried two ways:
1.
stage('send to ES') {
    logstashSend failBuild: true, maxLines: -1
}
2.
timestamps {
    logstash {
        node() {
            sh '''
            echo 'Hello, World!'
            '''
            try {
                stage('GitSCM') {
                    git url: 'github repo.git'
                }
                stage('Initialize') {
                    jdk = tool name: 'jdk'
                    env.JAVA_HOME = "${jdk}"
                    echo "jdk installation path is: ${jdk}"
                    sh "${jdk}/bin/java -version"
                    sh '$JAVA_HOME/bin/java -version'
                    def mvnHome = tool 'mvn'
                }
                stage('Build Stage') {
                    def mvnHome = tool 'mvn'
                    sh "${mvnHome}/bin/mvn -B verify"
                }
                currentBuild.result = 'SUCCESS'
            } catch (Exception err) {
                currentBuild.result = 'FAILURE'
            }
        }
    }
}
But with both ways I am not getting the build result, i.e., success or failure, in my Elasticsearch or Kibana.
Can someone help?
I didn't find a clear way to do that; my solution was to add these lines at the end of the Jenkinsfile:
echo "Current result: ${currentBuild.currentResult}"
logstashSend failBuild: true, maxLines: 3
In my case, I don't need to send all the console logs, only one log entry with the result per job.
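In a declarative pipeline, a variant of the same idea is to run the logstashSend step from a post block, which executes after the build result has been set. A minimal sketch (the single stage is a placeholder):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'build steps here'
            }
        }
    }
    post {
        // Runs after the result is known, so currentBuild.currentResult
        // already reflects SUCCESS/FAILURE/UNSTABLE when the log is shipped.
        always {
            echo "Current result: ${currentBuild.currentResult}"
            logstashSend failBuild: true, maxLines: 3
        }
    }
}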

Jenkins declarative pipeline, download latest upload (build) from Artifactory and get properties

Any suggestions on this little problem are very welcome! :)
It works fine to download the latest build, but the object does not contain any properties.
Is it possible to get the properties from a downloaded build?
The goal is to get an input box with a predefined value displaying the previous version, e.g. "R1G", and give the user the option to edit the value to e.g. R2A or any other value, or only to abort (abort meaning there will be no version).
The user also has the option to do nothing, which will lead to a timeout and finally an abort.
I want to
download the latest build from the Artifactory repo
store the build.number in "def prev_build"
display prev_build in an input for the user to update (a customized number)
'''some code
echo 'Publishing Artifact.....'
script {
    def artifactory_server_down = Artifactory.server 'Artifactory'
    def downLoad = """{
        "files":
        [
            {
                "pattern": "reponame/",
                "target": "${WORKSPACE}/prev/",
                "recursive": "false",
                "flat" : "false"
            }
        ]
    }"""
    def buildInfodown = artifactory_server_down.download(downLoad)
    // Don't need to publish because I only need the properties
    // Grab the latest revision name here and use it again
    echo 'Retrieving revision from last uploaded build.....'
    env.LAST_BUILD_NAME = buildInfodown.build.number
    // Yes, it's a map, and I have tried ['build.number'] but the map is empty
}
echo "Previous build name is $env.LAST_BUILD_NAME" // Will not contain the old (latest)
''' End of snippet
The output is null or the default value I have given the var, not the expected version number.
Yes; firstly, the properties should be present in the artifacts you are trying to download.
build.number and the like are part of the buildinfo.json file of the artifacts. These are not properties but metadata of a kind. This info is visible under the "Builds" menu in Artifactory: select the repo and build number, and in the last column/tab there is the build info. Click on that; the file holds all the info corresponding to the artifacts.
The build.number and other info are pushed/uploaded to Artifactory by the CI. For example, in Jenkins there is an option available when pushing to Artifactory, "Capture and publish build info", and this step does the work.
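To read that build info back from a pipeline, one hedged option is the Artifactory REST API: GET /api/build/<buildName> lists all runs of a build. A rough sketch (the base URL and build name are placeholders; it reuses the X-JFrog-Art-Api key and readJSON step that appear in the solution below, and assumes numeric build numbers):

script {
    def json_text = sh(
        script: "curl -s -H 'X-JFrog-Art-Api:${env.RECIPE_API_KEY}' " +
                "'https://artifactory.example.com/artifactory/api/build/my-build-name'",
        returnStdout: true).trim()
    def response = readJSON text: json_text
    // Each entry in buildsNumbers has a uri like "/42"; take the largest.
    def latest = response.buildsNumbers
        .collect { it.uri.replace('/', '') as int }
        .max()
    env.LAST_BUILD_NAME = "${latest}"
    echo "Previous build number is ${env.LAST_BUILD_NAME}"
}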
Thanks a lot for your help. I see your suggestion works, but when I got your answer I had already implemented another solution that also works well.
I am using the Artifactory Query Language:
https://www.jfrog.com/confluence/display/RTF/Artifactory+Query+Language
Just before the pipeline declaration in the pipeline file I added:
def artifactory_url = 'https://lote.corp.saab.se:8443/artifactory/api/search/aql'
def artifactory_search = 'items.find({ "repo":"my_repo"},{"@product.productNumber": {"$match":"produktname"}}).sort({"$desc":["created"]})'
pipeline
{
and ...
stage('Get latest revision') {
    steps {
        script {
            def json_text = sh(script: "curl -H 'X-JFrog-Art-Api:${env.RECIPE_API_KEY}' -X POST '${artifactory_url}' -d '${artifactory_search}' -H 'Content-Type: text/plain' -k", returnStdout: true).trim()
            def response = readJSON text: json_text
            VERSION = response.results[0].path
            echo "${VERSION}"
            println 'using each & entry'
            response[0].each { entry ->
                println 'Key:' + entry.key + ', Value:' + entry.value
            }
        }
    }
}
stage('Do release on master') {
    when {
        branch "master"
    }
    options {
        timeout(time: 1, unit: 'HOURS')
    }
    steps {
        script {
            RELEASE_SCOPE = input message: 'User input required', ok: 'Ok to go?!',
                parameters: [
                    choice(name: 'RELEASE_TYPE', choices: 'Artifactory\nClearCaseAndArtifactory\nAbort', description: 'What is the release scope?'),
                    string(name: 'VERSION', defaultValue: VERSION, description: '''Edit release name please!!''', trim: false)
                ]
        }
        echo 'Build both RPM and Zip packages'
        ... gradlew -Pversion=${RELEASE_SCOPE['VERSION']} clean buildPackages"
        script {
            def artifactory_server = Artifactory.server 'Artifactory'
            def buildInfo = Artifactory.newBuildInfo()
            def uploadSpec = """{
                "files": [
                    {
                        "pattern": "${env.WORKSPACE}/prodname/release/build/distributions/prodname*.*",
                        "target": "test_repo/${RELEASE_SCOPE['VERSION']}/",
                        "props": "product.name=ProdName;build.name=${JOB_NAME};build.number=${env.BUILD_NUMBER};product.revision=${RELEASE_SCOPE['VERSION']};product.productNumber=produktname"
                    }
                ]
            }"""
            println(uploadSpec)
            artifactory_server.upload(uploadSpec)
        }
    }
}

How to trigger multiple runs in a single pipeline job of Jenkins?

I have a pipeline job which runs with the below pipeline Groovy script:
pipeline {
    parameters {
        string(name: 'Unique_Number', defaultValue: '', description: 'Enter Unique Number')
    }
    stages {
        stage('Build') {
            agent { node { label 'Build' } }
            steps {
                script {
                    sh 'build.sh'
                }
            }
        }
        stage('Deploy') {
            agent { node { label 'Deploy' } }
            steps {
                script {
                    sh 'deploy.sh'
                }
            }
        }
        stage('Test') {
            agent { node { label 'Test' } }
            steps {
                script {
                    sh 'test.sh'
                }
            }
        }
    }
}
I trigger this job multiple times with a different unique ID number as the input parameter, so as a result I have multiple runs/builds of this job at different stages.
Given that, I need to promote multiple runs/builds to the next stage (i.e., from Build to Deploy or from Deploy to Test) in this pipeline job as one single build, instead of promoting each and every run/build to the next stage individually. Is there any possibility?
I was also trying to do the same thing and found no relevant answers; may this help someone.
The following reads a file that contains Jenkins job names and runs them iteratively from one single job.
Please adjust the code below to your Jenkins setup.
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                script {
                    git branch: 'Your Branch name', credentialsId: 'Your credentials', url: 'Your BitBucket Repo URL'
                    // To read a file from the workspace which contains the Jenkins job names
                    def filePath = readFile "${WORKSPACE}/ Your File Location"
                    // To read the file line by line
                    def lines = filePath.readLines()
                    // To iterate and run the Jenkins jobs one by one
                    for (line in lines) {
                        build(job: "$line/branchName",
                            parameters: [
                                string(name: 'vertical', value: "${params.vertical}"),
                                string(name: 'environment', value: "${params.environment}"),
                                string(name: 'branch', value: "${params.aerdevops_branch}"),
                                string(name: 'project', value: "${params.host_project}")
                            ]
                        )
                    }
                }
            }
        }
    }
}
You can start multiple jobs from one pipeline if you run something like:
build job: 'One', wait: false
build job: 'Two', wait: false
Your main job starts child pipelines, and the child pipelines run in parallel.
You can read the Pipeline Build Step documentation for more information.
Also, you can read about parallel runs in declarative pipelines.
There you can find a lot of examples of parallel running.
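As a rough illustration of the declarative parallel syntax (the job names 'One' and 'Two' are placeholders):

pipeline {
    agent any
    stages {
        stage('Fan out') {
            parallel {
                stage('One') {
                    steps {
                        // wait: false returns as soon as the child job is queued
                        build job: 'One', wait: false
                    }
                }
                stage('Two') {
                    steps {
                        build job: 'Two', wait: false
                    }
                }
            }
        }
    }
}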
