Checking in Jenkins whether an artefact exists for another job and branch - jenkins-pipeline

I would like to copy an artefact depending on whether it exists in another job. I currently have the following code within a step:
script {
    copyArtifacts filter: "dist.tar.gz",
        projectName: 'Frontend/${BRANCH_NAME}',
        selector: lastSuccessful(),
        target: "./public"
}
Now, for example, I want something like this (I realise the syntax is wrong/stupid):
script {
    if (file_exists('Frontend/${BRANCH_NAME}').lastSuccessful()) {
        copyArtifacts filter: "dist.tar.gz",
            projectName: 'Frontend/${BRANCH_NAME}',
            selector: lastSuccessful(),
            target: "./public"
    }
}
So if there is a last successful build for Frontend/${BRANCH_NAME}, I would like to execute the copy step shown above. I could try to perform a check via the browser URL, for example, but isn't there a more elegant solution that does this internally?
I am using a multibranch pipeline; that is the reason why I want to handle it differently.
Edit:
I guess I need to describe my problem a little better after reviewing Gerald's suggested solution!
Thank you Gerald for your solution! Unfortunately, it doesn't really fit my problem: it checks whether a certain file exists in a project's branch.
What I need: there is a multibranch pipeline in which an artefact is created in the respective branch pipeline (e.g. dist.tar.gz), and I would like to check whether this dist.tar.gz is available as an artefact and take the latest version of it.
I could certainly go via the file system here, but my thought was whether this can be done directly via methods available in Jenkins.
With copyArtifacts, I can simply specify the project name and branch, and it finds the rest on its own. But if the file does not exist, it fails. And since the result of my source project differs slightly depending on the branch, I need this check whether a file exists within another project for a given branch. As I said, I am referring to the pipeline and its result (= artifact), not to a file in the Git branch.
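One way to keep that failure from aborting the build might be the copyArtifacts step's optional parameter. A minimal sketch, assuming the installed Copy Artifact plugin version supports that flag:
script {
    copyArtifacts filter: 'dist.tar.gz',
        projectName: 'Frontend/${BRANCH_NAME}',
        selector: lastSuccessful(),
        target: './public',
        optional: true // assumption: with 'optional' the step is skipped quietly if nothing matches
}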
Edit 2:
Okay, a little progress:
I can definitely retrieve what the last successful build in the multipipeline was with the following code:
def jobName = "My folder/multipipeline/master"
def job = Jenkins.instance.getItemByFullName(jobName)
println "Job type: ${job.getClass()}"
println "Last success: ${job.getLastSuccessfulBuild()}"
println "All builds: ${job.getBuilds().collect { it.getNumber() }}"
println "Last build: ${job.getLastBuild()}"
That displays something like:
Job type: class org.jenkinsci.plugins.workflow.job.WorkflowJob
Last success: my folder/multipipeline/master/dev #21
All builds: [21, 20, 19, 18, 17]
Last build: my folder/multipipeline/master/dev #21
Now I should be able to access an artefact within the Jenkins instance, or at least check whether it exists at all.
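For reference, a minimal sketch of such a check building on the lookup above (assumptions: it runs with script approval or outside the Groovy sandbox, and dist.tar.gz is the archived artifact name):
def job = Jenkins.instance.getItemByFullName("My folder/multipipeline/master")
def lastSuccess = job?.getLastSuccessfulBuild()
// Run.getArtifacts() lists the artifacts archived by that build
boolean hasArtifact = lastSuccess != null &&
    lastSuccess.getArtifacts().any { it.getFileName() == 'dist.tar.gz' }
println "dist.tar.gz available: ${hasArtifact}"
The copyArtifacts call could then be guarded with if (hasArtifact) { ... }.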

environment {
    otherProjectWksp = '../other-project-with-Git-repo/'
    otherProjectBranch = 'master'
    otherProjectRemoteBranch = 'origin/master'
    checkFileExists = 'README.md'
    //checkFileExists = 'NOT_EXISTING' // for testing
    fileExistsStatus = '-1'
}
stages {
    stage('Check if file exists in other project\'s remote branch') {
        steps {
            dir( otherProjectWksp ) {
                script {
                    fileExistsStatus = sh script: """
                        #!/bin/bash
                        git cat-file -e ${otherProjectRemoteBranch}:${checkFileExists} && echo ${checkFileExists} exists
                        """,
                        returnStatus: true
                }
            } // dir
            echo "fileExistsStatus: ${fileExistsStatus}"
        }
    } // stage Check file existence
    stage('Copy artifacts') {
        when { expression { fileExistsStatus == '0' } }
        steps {
            echo "Copying artifacts..."
            // ...
        }
    } // stage Copy artifacts
}
Console Output
...
[Pipeline] stage
[Pipeline] { (Check if file exists in other project's remote branch)
[Pipeline] dir
Running in /var/lib/jenkins/workspace/other-project-with-Git-repo
[Pipeline] {
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ git cat-file -e origin/master:README.md
+ echo README.md exists
README.md exists
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // dir
[Pipeline] echo
fileExistsStatus: 0
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Copy artifacts)
[Pipeline] echo
Copying artifacts...
...

Related

Keep workspace when switching stages in combination with agent none

I have a Jenkins pipeline where I want to first build my project (Stage A) and trigger an asynchronous, long-running external test process with the built artifacts. The external test process then resumes the job using a callback. Afterwards, (Stage B) performs some validations of the test results and attaches them to the job. I don't want to block an executor while the external test process is running, so I came up with the following Jenkinsfile, which mostly suits my needs:
#!/usr/bin/env groovy
pipeline {
    agent none
    stages {
        stage('Stage A') {
            agent { docker { image 'my-maven:0.0.17' } }
            steps {
                script {
                    sh "rm testfile.txt"
                    sh "echo ABCD > testfile.txt"
                    sh "cat testfile.txt"
                }
            }
        }
        stage('ContinueJob') {
            agent none
            input { message "The job will continue once the asynchronous operation has finished" }
            steps { echo "Job has been continued" }
        }
        stage('Stage B') {
            agent { docker { image 'my-maven:0.0.17' } }
            steps {
                script {
                    sh "cat testfile.txt"
                    def data = readFile(file: 'testfile.txt')
                    if (!data.contains("ABCD")) {
                        error("ABCD not found in testfile.txt")
                    }
                }
            }
        }
    }
}
However, depending on the load of the Jenkins instance, the time passed, or some other unknown conditions, sometimes the files that I create in "Stage A" are no longer available in "Stage B". It seems that Jenkins switches to a different Docker node, which causes the loss of workspace data; e.g. in the logs I can see:
[Pipeline] { (Stage A)
[Pipeline] node
Running on Docker3 in /var/opt/jenkins/workspace/TestJob
.....
[Pipeline] stage
[Pipeline] { (Stage B)
[Pipeline] node
Running on Docker2 in /var/opt/jenkins/workspace/TestJob
Whereas with a successful run, it keeps using e.g. node "Docker2" for both stages.
Note that I have also tried reuseNode true within the two docker sections but that didn't help either.
How can I tell Jenkins to keep my workspace files available?
As pointed out by the comment from @Patrice M., if the files are not that big (which is the case for me), stash/unstash are very useful to solve this problem. I have used this combination for a couple of months now and it has solved my issue.
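A minimal sketch of that approach (stage layout and file name taken from the question; stash copies files through the Jenkins controller, so it suits small artifacts):
stage('Stage A') {
    agent { docker { image 'my-maven:0.0.17' } }
    steps {
        script {
            sh "echo ABCD > testfile.txt"
            // save the file on the controller so a later stage can
            // restore it regardless of which node it runs on
            stash name: 'testdata', includes: 'testfile.txt'
        }
    }
}
stage('Stage B') {
    agent { docker { image 'my-maven:0.0.17' } }
    steps {
        script {
            unstash 'testdata' // restore the stashed files into this workspace
            sh "cat testfile.txt"
        }
    }
}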

Parallel execution 'mvn test' in Jenkins

I am trying to create a Jenkinsfile for parallel execution of the command mvn test with different arguments. In the first stage of the Jenkinsfile I create a *.csv file containing the future arguments for the mvn test command. Also, I don't know the quantity of parallel stages in advance (it depends on the first stage, where I get data from the DB). So, to summarize the logic again:
The first stage gets data from the DB via the command mvn test (with args). In this test I save the data into a csv file.
In a loop in the Jenkinsfile I read every line, parse it and get the args for parallel execution of mvn test (with args based on the parsed data).
Now it looks like this (only necessary fragments of jenkinsfile):
def buildProject = { a, b, c ->
    node {
        stage(a) {
            catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                sh "mvn -Dtest=test2 test -Darg1=${b} -Darg2=${c}"
            }
        }
    }
}
stages {
    stage('Preparation of file.csv') {
        steps {
            sh 'mvn -Dtest=test1 test'
        }
    }
    stage('Parallel stage') {
        steps {
            script {
                file = readFile "file.csv"
                lines = file.readLines()
                def branches = [:]
                for (i = 0; i < lines.size(); i++) {
                    values = lines[i].split(';')
                    branches["${values[0]}"] = { buildProject(values[0], values[1], values[2]) }
                }
                parallel branches
            }
        }
    }
}
So, which problems am I facing now?
I see the following error in the log:
[ERROR] The goal you specified requires a project to execute but there is no POM in this directory (/Data/jenkins/workspace//#2)
I looked at the workspaces in Jenkins and saw that several empty(!!!) directories were created (their quantity equals the quantity of parallel stages). And therefore the mvn command could not be executed because of the absence of pom.xml and the other files.
In branches the same data is saved on every iteration of the loop, and in stage(a) I see the same title (even though every iteration of the loop has a unique values[0]).
Can you help me with this issue?
Thank you in advance!
So, regarding this Jenkins issue issues.jenkins.io/browse/JENKINS-50307 and the workaround which can be found there, the task can be closed!
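For readers hitting the same symptoms: the repeated titles and duplicated data are the classic Groovy closure-capture problem — values and i live in the shared script binding, so every closure ends up reading the last iteration's values. A minimal sketch of the usual fix (copy each iteration's data into local defs before building the closure):
def branches = [:]
for (int i = 0; i < lines.size(); i++) {
    // 'def' makes these per-iteration locals, so each closure
    // captures its own copy instead of the shared binding
    def values = lines[i].split(';')
    def name = values[0]
    def arg1 = values[1]
    def arg2 = values[2]
    branches[name] = { buildProject(name, arg1, arg2) }
}
parallel branches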

Returning Boolean value in Groovy function when Maven build fails in shell script

I have written a Jenkins Pipeline Groovy script for executing Maven Sonar analysis of multiple projects. The code is working fine, but the issue is that sometimes the build fails for some projects, which I need to track properly. My executeMavenSonarBuild function is given below:
def executeMavenSonarBuild(projectName) {
    stage ('Execute Maven Build for ' + projectName) {
        sh """ {
            cd ${projectName}/
            mvn clean install verify sonar:sonar
        } || {
            echo 'Build Failed'
        }
        """
    }
    return true;
}
If the build fails it prints 'Build Failed', but how can we return a false Boolean as the return value of the function?
You have to get the status from the mvn call itself, which should look like this:
def result = sh ( script: 'mvn ...', returnStatus: true)
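Building on that, a minimal sketch of the complete function (the Maven command is taken from the question; returnStatus: true makes sh return the exit code instead of aborting the build):
def executeMavenSonarBuild(projectName) {
    def status = -1
    stage('Execute Maven Build for ' + projectName) {
        // returnStatus: true -> sh returns the exit code rather than
        // failing the build on a non-zero result
        status = sh(script: "cd '${projectName}' && mvn clean install verify sonar:sonar",
                    returnStatus: true)
    }
    return status == 0 // true on success, false when the build failed
}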

Jenkins Pipeline Fails if Step is Unstable

Currently my pipeline fails (red) when a Maven job is unstable (yellow).
node {
    stage 'Unit/SQL-Tests'
    parallel (
        phase1: { build 'Unit-Tests' }, // maven
        phase2: { build 'SQL-Tests' }   // shell
    )
    stage 'Integration-Tests'
    build 'Integration-Tests' // maven
}
In this example the job Unit-Tests' result is unstable, but it is shown as failed in the pipeline.
How can I change the jobs/pipeline/Jenkins to have (1) the pipeline step unstable instead of failed and (2) the pipeline's status unstable instead of failed?
I tried adding the MAVEN_OPTS parameter -Dmaven.test.failure.ignore=true, but that did not solve the issue. I am unsure how to wrap the build 'Unit-Tests' call in some logic that can catch and process the result.
Adding a sub-pipeline with this logic doesn't do the trick, as there is no option to check out from Subversion (that option is available in a regular Maven job). I would prefer not to use a command-line checkout if possible.
Lessons learned:
Jenkins will continuously update the pipeline according to the currentBuild.result value which can be either SUCCESS, UNSTABLE or FAILURE (source).
The result of build job: <JOBNAME> can be stored in a variable. The build status is in variable.result.
build job: <JOBNAME>, propagate: false will prevent the whole build from failing right away.
currentBuild.result can only get worse. If that value was previously FAILURE and receives a new status SUCCESS through currentBuild.result = 'SUCCESS', it will stay FAILURE.
This is what I finally used:
node {
    def result // define the variable once in the beginning
    stage 'Unit/SQL-Tests'
    parallel (
        phase1: { result = build job: 'Unit', propagate: false }, // might be UNSTABLE
        phase2: { build 'SQL-Tests' }
    )
    currentBuild.result = result.result // update the build status; Jenkins will update the pipeline's current status accordingly
    stage 'Install SQL'
    build 'InstallSQL'
    stage 'Deploy/Integration-Tests'
    parallel (
        phase1: { build 'Deploy' },
        phase2: { result = build job: 'Integration-Tests', propagate: false }
    )
    currentBuild.result = result.result // should the Unit-Test be FAILURE and Integration-Test SUCCESS, the currentBuild.result will stay FAILURE (it can only get worse)
    stage 'Code Analysis'
    build 'Analysis'
}
Whether the step is UNSTABLE or FAILED, the final build result in your script will be FAILURE.
You can set propagate to false to avoid failing the flow right away.
def result = build job: 'test', propagate: false
At the end of the flow, you can decide the final result based on what you got in the result variable.
For example
currentBuild.result='UNSTABLE'
Here is a detailed example:
How to set current build result in Pipeline
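Putting the two ideas together, a minimal sketch of such a verdict (the job name 'test' is the placeholder from the snippet above; downgrading to UNSTABLE is just one possible policy):
def result = build(job: 'test', propagate: false)
echo "Downstream result: ${result.result}"
// one possible policy: record any non-success as UNSTABLE instead of failing
if (result.result != 'SUCCESS') {
    currentBuild.result = 'UNSTABLE'
}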

How to continue a Jenkins build even though a build step failed?

I am using a Phing build script with Jenkins and would like to run it end to end on a job and capture all the reports. The problem is that it stops building on a failed build step. Is there a way, or a plugin, that would let the job continue even on failures?
Thanks
I don't know a lot about Phing but, since it's based on Ant, if the build step you are executing has a "failonerror" attribute you should be able to set it to false so that the entire build doesn't fail if the step returns an error.
Yes, use a try/catch block in your pipeline scripts.
example:
try {
    // do some stuff that potentially fails
} catch (error) {
    // do stuff if try fails
} finally {
    // when you need some clean up to do
}
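A concrete sketch of that pattern for this question's setup (assumptions: 'phing build' and the report glob reports/**/*.xml are placeholders; junit is the standard step for collecting test reports):
try {
    sh 'phing build'                 // potentially failing build step
} catch (error) {
    echo "Build step failed: ${error}"
    currentBuild.result = 'UNSTABLE' // keep the job going, mark it unstable
} finally {
    junit 'reports/**/*.xml'         // collect reports even after a failure
}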
Or alternatively, if you use sh commands to run these tests, consider running your sh scripts with the "|| true" suffix; this tells the Linux sh script to exit with a result code of 0, even if your real command exited with a non-zero exit code.
example:
stage('Test') {
    def testScript = ""
    def testProjects = findFiles(glob: 'test/**/project.json')
    if (!fileExists('reports/xml')) {
        if (!fileExists('reports')) {
            sh "mkdir reports"
        }
        sh "mkdir reports/xml"
    }
    for (prj in testProjects) {
        println "Test project located, running tests: " + prj.path
        def matcher = prj.path =~ 'test\\/(.+)\\/project.json'
        testScript += "dotnet test --no-build '${prj.path}' -xml 'reports/xml/${matcher[0][1]}.Results.xml' || true\n"
    }
    sh testScript
}
