Set a stage status in Jenkins Pipelines - jenkins-pipeline

Is there any way in a scripted pipeline to mark a stage as unstable but only show that stage as unstable without marking every stage as unstable in the output?
I can do something like this:
node() {
    stage("Stage1") {
        // do work (passes)
    }
    stage("Stage2") {
        // something went wrong, but it isn't catastrophic...
        currentBuild.result = 'UNSTABLE'
    }
    stage("Stage3") {
        // keep going...
    }
}
But when I run this, Jenkins marks every stage as unstable. I'd like the first and last stages to show green if possible, and only the stage that had an issue to go yellow.
It's OK if the whole pipeline gets flagged unstable, but it would also be nice if a later stage could override that and set the final result back to passing.

This is now possible:
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                sh 'exit 0'
            }
        }
        stage('2') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'UNSTABLE') {
                    sh "exit 1"
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
In the example above, all stages will execute, the pipeline will be successful, but stage 2 will show as unstable. I use a declarative pipeline in the example, but it should work the same in a scripted pipeline.
As you might have guessed, you can freely change the buildResult and stageResult to any combination. You can even fail the build and continue the execution of the pipeline.
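For example, a minimal sketch of the "fail the build but keep going" variant only changes the result parameters in stage 2 above:
stage('2') {
    steps {
        // Both the stage and the overall build are marked FAILURE,
        // but the error is swallowed, so stage 3 still runs.
        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
            sh 'exit 1'
        }
    }
}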
Just make sure your Jenkins is up to date, since this is a fairly new feature. Upgrade any plugins affected by bug JENKINS-39203, such as:
Pipeline Graph Analysis plugin (at least version 1.10 to mark single stages as 'UNSTABLE')
Pipeline Basic Steps plugin (at least version 2.18 to set stageResult in catchError)

Also worth mentioning are the warnError and unstable steps, released in July 2019 as part of the "Jenkins Pipeline Stage Result Visualization Improvements".
They are meant to allow you to mark a stage as unstable (this appears as an amber warning icon, and not a red error icon), while the rest of the build is still marked as successful.
Examples (lifted from the link above):
warnError acts like catchError, and changes the stage to have a warning if any part of the block fails:
warnError('Script failed!') {
    sh('false')
}
unstable is a directive-style step, which you can use to mark the current stage as unstable:
try {
    sh('false')
} catch (ex) {
    unstable('Script failed!')
}

An elegant way I found to set only the stage result without changing the build result is this:
catchError(stageResult: 'UNSTABLE', buildResult: currentBuild.result) {
    error 'example of throwing an error'
}

I know this question is a few years old, but I'd like to offer an alternative for people coming across this problem. The accepted answer is close to what I needed, but the issue is that catchError does not allow an alternate block of code to run when an error occurs inside it, the way a try/catch does. I got around this in the following way:
stage('Stage 2') {
    steps {
        echo 'In stage 2'
        script {
            try {
                echo 'In stage 2: try block'
                sh 'python3 ./python/script.py'
            }
            catch(Exception e) {
                echo 'In stage 2: catch block'
                catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
                    echo 'In stage 2: catchError block'
                    sh 'exit 1'
                }
            }
        }
    }
}
When my Python script raises an exception, whatever is in the catch block will execute per usual try/catch logic. Putting the catchError there as well and raising a bogus exception in it guarantees that my build and stage statuses will come out how I want (e.g. all stages with status = SUCCESS except for Stage 2, and a build status = UNSTABLE).
There's no question that this is an awkward workaround. However, the Jenkins developers insist (see here) that "To ensure consistency the FAILED state should always fail the pipeline."
My code snippet above is just a simple proof of concept, but in my production code I have a legitimate need to fail a stage yet mark the entire build as UNSTABLE. The noble intentions behind designing Jenkins declarative pipelines with so much rigor are admirable, but really inconvenient and unnecessary in my opinion.
Hopefully this workaround helps someone. If I'm missing something then I'm open to looking at a cleaner way of doing this.

Related

How can we trigger a Jenkins build iteratively

I have two jobs in Jenkins, an upstream and a downstream.
When I trigger the upstream job, the files below have to be renamed to package.xml and deployed to the downstream job, one at a time. How can I do this with a shell script?
Any idea?
pkg1.xml
pkg2.xml
pkg3.xml
I'm not sure what you're trying to do with the second part (maybe file another, more detailed question), but here's how to rename files:
pipeline {
    agent { label 'docker' }
    stages {
        stage('build') {
            steps {
                // you don't need to create these files;
                // this was just for my testing.
                sh 'touch pkg1.xml pkg2.xml pkg3.xml'
                sh "rename 's/pkg/package/' *"
            }
        }
    }
}
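For the second part, a rough sketch of the iterative rename-and-trigger could look like the extra stage below. This is only an illustration: it assumes a parameterized downstream job (here called 'downstream-job'), a hypothetical PACKAGE_FILE parameter, and the Pipeline Utility Steps plugin for findFiles; adjust the names to your setup.
stage('deploy downstream') {
    steps {
        script {
            // For each pkg*.xml: copy it over package.xml, then trigger the
            // downstream job once, passing the original file name along.
            for (f in findFiles(glob: 'pkg*.xml')) {
                sh "cp '${f.path}' package.xml"
                build job: 'downstream-job', parameters: [string(name: 'PACKAGE_FILE', value: f.name)]
            }
        }
    }
}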

What is the equivalent of an Ant taskdef in Gradle?

I've been struggling with this for a day and a half or so. I'm trying to replicate the following Ant concept in Gradle:
<target name="test">
...
<runexe name="<filename> params="<params>" />
...
</target>
where runexe is declared elsewhere as
<macrodef name="runexe" >
...
</macrodef>
and might also be a taskdef or a scriptdef i.e. I'd like to be able to call a reusable, pre-defined block of code and pass it the necessary parameters from within Gradle tasks. I've tried many things. I can create a task that runs the exe without any trouble:
task runexe(type: Exec) {
    commandLine 'cmd', '/c', 'dir', '/B'
}

task test(dependsOn: 'runexe') {
    runexe {
        commandLine 'cmd', '/c', 'dir', '/N', 'e:\\utilities\\'
    }
}

test << {
    println "Testing..."
    // I want to call runexe here.
    ...
}
and use dependsOn to have it run. However this doesn't allow me to run runexe precisely when I need to. I've experimented extensively with executable, args and commandLine. I've played around with exec and tried several different variations found here and around the 'net. I've also been working with the free books available from the Gradle site.
What I need to do is read a list of files from a directory and pass each file to the application with some other arguments. The list of files won't be known until execution time i.e. until the script reads them, the list can vary and the call needs to be made repeatedly.
My best option currently appears to be what I found here, which may be fine, but it just seems that there should be a better way. I understand that tasks are meant to be called once, and that you can't call a task from within another task or pass parameters to one, but I'd dearly like to know the correct approach to this in Gradle. I'm hoping that one of the Gradle designers might be kind enough to enlighten me, as this question is asked frequently all over the web and I'm yet to find a clear answer or a solution that I can make work.
If your task needs to read file names, I suggest using the Gradle API instead of executing commands. Using exec also makes the build OS-specific and therefore not necessarily portable across operating systems.
Here's how to do it:
task hello {
    doLast {
        def tree = fileTree(dir: '/tmp/test/txt')
        def array = []
        tree.each {
            array << it
            print "${it.getName()} added to array!\n"
        }
    }
}
I ultimately went with exec {}, mentioned above. I have it working well in several places and it seems to be the best option for this use case.
Spelled out, that means this:
def doMyThing(String target) {
    exec {
        executable "something.sh"
        args "-t", target
    }
}
as mentioned above. This provides the same ultimate functionality.
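Combining this with the fileTree approach from the earlier answer, a sketch of calling the helper once per file could look like the task below. The task name and directory are placeholders, and doMyThing is the function defined above.
task runOnAllFiles {
    doLast {
        // Files are discovered at execution time, when the task actually runs.
        fileTree(dir: 'input/files').each { f ->
            doMyThing(f.absolutePath)
        }
    }
}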

Task based on Copy appears not to execute()

I have the following code in my build.gradle file:
task untarServer(type: Copy) {
    from tarTree('build/libs/server.tar.gz')
    into project.ext.tomcat + '../'
} << {
    println 'Unpacked, waiting for tomcat to deploy wars ...'
    sleep 10000
}

task deploy << {
    tarball.execute()
    untarServer.execute()
    stopTomcat.execute()
    startTomcat.execute()
}
Everything works great except untarServer, which appears not to run at all. Untar is based on an example from the documentation. Probably I've done something silly due to the late hour, but I'm missing it. How can I fix this? I have in the back of my mind that I could drop down to using ant.untar, but I'd like to do it the native gradle way if possible.
Edit: How do I know it's not running? Because the println statement never shows up, the build does not pause for 10 seconds, and the contents of the tarball do not show up in the "into" location.
I haven't seen the } << { syntax before; I'd change it to doLast { ... } inside the curly braces, or add untarServer << { ... } as a separate statement. Also, you should never call execute() on a task. It's totally unsupported and bad things will happen. Instead, establish task relationships (dependsOn, mustRunAfter, finalizedBy).
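A rough sketch of the same deploy flow wired up with task relationships instead of execute(), reusing the task names from the question (tarball, stopTomcat and startTomcat are assumed to be defined earlier in the same build script):
task untarServer(type: Copy) {
    from tarTree('build/libs/server.tar.gz')
    into project.ext.tomcat + '../'
    doLast {
        println 'Unpacked, waiting for tomcat to deploy wars ...'
        sleep 10000
    }
}

task deploy {
    dependsOn 'tarball', 'untarServer', 'stopTomcat', 'startTomcat'
}

// Enforce the original ordering between the dependencies.
untarServer.mustRunAfter 'tarball'
stopTomcat.mustRunAfter 'untarServer'
startTomcat.mustRunAfter 'stopTomcat'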

Controlling Gradle task execution

In my build.gradle script, I have a lot of tasks, each depending on zero or more other tasks.
There are three 'main' tasks which can be called: moduleInstallation, backupFiles and restoreFiles.
Here's the question: I would like to be able to tell Gradle which tasks to execute and which don't need to execute. For example, when calling moduleInstallation, I want all depending tasks to execute (regardless of their UP-TO-DATE flag), but not the restore tasks. I've tried altering the phase in which the tasks get executed (e.g. config phase, execution phase,...) and a couple of other things, but all tasks just keep getting executed.
A solution I've thought of is to state in the main tasks that, when that main task is called (e.g. moduleInstallation), we set the UP-TO-DATE flag of all non-related tasks so they don't get executed. Is that possible?
EDIT: Here's an example:
When moduleInstallation is called (which depends on backupFiles), restoreFiles (which depends on restoreFromDate) is executed too.
First main action
task moduleInstallation << {
    println "Hello from moduleInstallation"
}

task backupFiles {
    doLast {
        println "Hello from backupFiles"
    }
}
Second main action
task restoreFiles {
    println "Hello from restoreFiles"
}

task restoreFromDate {
    println "Hello from restoreFromDate"
}
Dependencies:
moduleInstallation.dependsOn backupFiles
restoreFiles.dependsOn restoreFromDate
So when I type gradle moduleInstallation in the terminal, I get the following output:
Hello from restoreFromDate
Hello from restoreFiles
Hello from backupFiles
Hello from moduleInstallation
The second snippet has to use doLast (or its << shortcut) like the first snippet. Otherwise, the code is configuration code and will always be evaluated, no matter which tasks are eventually going to be executed. In other words, it's not the restoreFiles and restoreFromDate tasks that are being executed here (as one can tell from the bits of command line output that you don't show), but (only) their configuration code.
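For example, the second snippet rewritten with doLast looks like this:
task restoreFiles {
    doLast {
        println "Hello from restoreFiles"
    }
}

task restoreFromDate {
    doLast {
        println "Hello from restoreFromDate"
    }
}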
To better understand what's going on here (which is crucial for understanding Gradle), I recommend studying the Build Lifecycle chapter in the Gradle User Guide.

How to mark a build unstable in Jenkins when running shell scripts

In a project I'm working on, we are using shell scripts to execute different tasks. Some are sh/bash scripts that run rsync, and some are PHP scripts. One of the PHP scripts is running some integration tests that output to JUnit XML, code coverage reports, and similar.
Jenkins is able to mark the jobs as successful / failed based on exit status. In PHP, the script exits with 1 if it has detected that the tests failed during the run. The other shell scripts run commands and use the exit codes from those to mark a build as failed.
// :: End of PHP script:
// If any tests have failed, fail the build
if ($build_error) exit(1);
In Jenkins Terminology, an unstable build is defined as:
A build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable.
How can I get Jenkins to mark a build as unstable instead of only success / failed when running shell scripts?
Modern Jenkins versions (since 2.26, October 2016) solved this: it's just an advanced option for the Execute shell build step!
You can just choose and set an arbitrary exit value; if it matches, the build will be unstable. Just pick a value that is unlikely to be returned by a real command in your build.
It can be done without printing magic strings and using Text Finder. Here's some info on it.
Basically, you need the jenkins-cli.jar file from http://yourserver.com/cli available in your shell scripts; then you can use the following command to mark a build unstable:
java -jar jenkins-cli.jar set-build-result unstable
To mark build unstable on error, you can use:
failing_cmd cmd_args || java -jar jenkins-cli.jar set-build-result unstable
The problem is that jenkins-cli.jar has to be available from the shell script. You can either put it in an easy-to-access path or download it via the job's shell script:
wget ${JENKINS_URL}jnlpJars/jenkins-cli.jar
Use the Text-finder plugin.
Instead of exiting with status 1 (which would fail the build), do:
if ($build_error) print("TESTS FAILED!");
Then, in the post-build actions, enable the Text Finder, set the regular expression to match the message you printed (TESTS FAILED!), and check the "Unstable if found" checkbox under that entry.
You should use a Jenkinsfile to wrap your build script and simply mark the current build as UNSTABLE by using currentBuild.result = "UNSTABLE".
stage('Build') {
    def status = /* your build command goes here */
    if (status == "MARK-AS-UNSTABLE") {
        currentBuild.result = "UNSTABLE"
    }
}
You should also be able to use Groovy and do what Text Finder did.
Marking a build as unstable with the Groovy Postbuild plugin:
if (manager.logContains("Could not login to FTP server")) {
    manager.addWarningBadge("FTP Login Failure")
    manager.createSummary("warning.gif").appendText("<h1>Failed to login to remote FTP Server!</h1>", false, false, false, "red")
    manager.buildUnstable()
}
Also see Groovy Postbuild Plugin
In my job script, I have the following statements (this job only runs on the Jenkins master):
# This is the condition test I use to set the build status as UNSTABLE
if [ ${PERCENTAGE} -gt 80 -a ${PERCENTAGE} -lt 90 ]; then
    echo WARNING: disc usage percentage above 80%
    # Download the Jenkins CLI JAR:
    curl -o jenkins-cli.jar ${JENKINS_URL}/jnlpJars/jenkins-cli.jar
    # Set build status to unstable
    java -jar jenkins-cli.jar -s ${JENKINS_URL}/ set-build-result unstable
fi
You can see this and a lot more information about setting build statuses on the Jenkins wiki: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
Configure the PHP build to produce an XML JUnit report:
<phpunit bootstrap="tests/bootstrap.php" colors="true">
    <logging>
        <log type="junit" target="build/junit.xml"
             logIncompleteSkipped="false" title="Test Results"/>
    </logging>
    ....
</phpunit>
Finish the build script with status 0:
...
exit 0;
Add the post-build action Publish JUnit test result report and set Test report XMLs to the pattern below. This plugin will change a stable build to unstable when tests fail.
**/build/junit.xml
Add the Jenkins Text Finder plugin with console output scanning and the other options unchecked. This plugin fails the whole build on a fatal error:
PHP Fatal error:
Duplicating my answer from here because I spent some time looking for this:
This is now possible in newer versions of Jenkins; you can do something like this:
#!/usr/bin/env groovy

properties([
    parameters([string(name: 'foo', defaultValue: 'bar', description: 'Fails job if not bar (unstable if bar)')]),
])

stage('Stage 1') {
    node('parent') {
        def ret = sh(
            returnStatus: true, // This is the key bit!
            script: '''if [ "$foo" = bar ]; then exit 2; else exit 1; fi'''
        )
        // ret can be any number/range, does not have to be 2.
        if (ret == 2) {
            currentBuild.result = 'UNSTABLE'
        } else if (ret != 0) {
            currentBuild.result = 'FAILURE'
            // If you do not manually error, the status will be set to "failed", but the
            // pipeline will still run the next stage.
            error("Stage 1 failed with exit code ${ret}")
        }
    }
}
The Pipeline Syntax generator shows you this in the advanced tab.
I find the most flexible way to do this is by reading a file in the Groovy Postbuild plugin.
import hudson.FilePath
import java.io.InputStream

def build = Thread.currentThread().executable
String unstable = null
if (build.workspace.isRemote()) {
    channel = build.workspace.channel
    fp = new FilePath(channel, build.workspace.toString() + "/build.properties")
    InputStream is = fp.read()
    unstable = is.text.trim()
} else {
    fp = new FilePath(new File(build.workspace.toString() + "/build.properties"))
    InputStream is = fp.read()
    unstable = is.text.trim()
}
manager.listener.logger.println("Build status file: " + unstable)
if (unstable.equalsIgnoreCase('true')) {
    manager.listener.logger.println('setting build to unstable')
    manager.buildUnstable()
}
If the file contents are 'true' the build will be set to unstable. This will work on the local master and on any slaves you run the job on, and for any kind of scripts that can write to disk.
I thought I would post another answer for people that might be looking for something similar.
In our build job we have cases where we want the build to continue but be marked as unstable. For us, it relates to version numbers.
So, I wanted to set a condition on the build and set the build to unstable if that condition is met.
I used the Conditional step (single) option as a build step.
Then I used Execute system Groovy script as the build step that would run when that condition is met.
I used Groovy Command and set the script to the following:
import hudson.model.*

def build = Thread.currentThread().executable
build.@result = hudson.model.Result.UNSTABLE
return
That seems to work quite well.
I stumbled upon the solution here
http://tech.akom.net/archives/112-Marking-Jenkins-build-UNSTABLE-from-environment-inject-groovy-script.html
In addition to all the other answers, Jenkins also allows the use of the unstable() method, which is in my opinion clearer.
This method can be used with a message parameter that describes why the build is unstable.
On top of that, you can use the returnStatus option of your shell step (bat or sh) to drive it.
For example:
def status = bat(script: "<your command here>", returnStatus: true)
if (status != 0) {
    unstable("unstable build because script failed")
}
Of course, you can make something with more granularity depending on your needs and the return status.
Furthermore, for raising an actual error you can use error() in place of unstable(). It will mark your build as failed instead of unstable, and the syntax is the same (it also takes a message).
Text Finder is good only if the job status hasn't already been changed from SUCCESS to FAILED or ABORTED.
For such cases, use a Groovy script in the post-build step:
errpattern = ~/TEXT-TO-LOOK-FOR-IN-JENKINS-BUILD-OUTPUT.*/
manager.build.logFile.eachLine { line ->
    errmatcher = errpattern.matcher(line)
    if (errmatcher.find()) {
        manager.build.@result = hudson.model.Result.NEW-STATUS-TO-SET
    }
}
See more details in a post I've written about it:
http://www.tikalk.com/devops/JenkinsJobStatusChange/
As a lighter alternative to the existing answers, you can set the build result with a simple HTTP POST to access the Groovy script console REST API:
curl -X POST \
    --silent \
    --user "$YOUR_CREDENTIALS" \
    --data-urlencode "script=Jenkins.instance.getItemByFullName( '$JOB_NAME' ).getBuildByNumber( $BUILD_NUMBER ).setResult( hudson.model.Result.UNSTABLE )" \
    "$JENKINS_URL/scriptText"
Advantages:
no need to download and run a huge jar file
no kludges for setting and reading some global state (console text, files in workspace)
no plugins required (besides Groovy)
no need to configure an extra build step that is superfluous in the PASSED or FAILURE cases.
For this solution, your environment must meet these conditions:
The Jenkins REST API can be accessed from the slave.
The slave must have access to credentials that allow access to the Jenkins Groovy script console REST API.
If you want to use a declarative approach, I suggest using code like this:
pipeline {
    agent any // an agent section is required in declarative pipelines
    stages {
        // create a separate stage only for the problematic command
        stage("build") {
            steps {
                sh "command"
            }
            post {
                failure {
                    // set status
                    unstable 'rsync was unsuccessful'
                }
                always {
                    echo "Do something at the end of stage"
                }
            }
        }
    }
    post {
        always {
            echo "Do something at the end of pipeline"
        }
    }
}
In case you want to keep everything in one stage, use catchError:
pipeline {
    agent any // an agent section is required in declarative pipelines
    stages {
        stage("build") {
            steps {
                // everything stays in one stage; only the problematic command is wrapped
                catchError(stageResult: 'UNSTABLE') {
                    sh "command"
                }
                sh "other command"
            }
        }
    }
}
One easy way to set a build as unstable is to run exit 13 in your "Execute shell" block and configure 13 as the exit code that marks the build unstable (the advanced option mentioned in the first answer above).
You can just call "exit 1", and the build will fail at that point and not continue. I wound up making a pass-through make function to handle it for me, calling safemake instead of make for building:
function safemake {
    make "$@"
    if [ "$?" -ne 0 ]; then
        echo "ERROR: BUILD FAILED"
        exit 1
    else
        echo "BUILD SUCCEEDED"
    fi
}
